I kind of have two real questions. Both relate to this code:
import urllib.parse
import requests

def query(q):
    base_url = "https://api.duckduckgo.com/?q={}&format=json"
    resp = requests.get(base_url.format(urllib.parse.quote(q)))
    return resp.json()
One is this: when I query something like "US Presidents", I get back something like this:
{'Abstract': '', 'AbstractSource': '', 'AbstractText': '', 'AbstractURL': '', 'Answer': '', 'AnswerType': '', 'Definition': '', 'DefinitionSource': '', 'DefinitionURL': '', 'Entity': '', 'Heading': '', 'Image': '', 'ImageHeight': '', 'ImageIsLogo': '', 'ImageWidth': '', 'Infobox': '', 'Redirect': '', 'RelatedTopics': [], 'Results': [], 'Type': '', 'meta': {'attribution': None, 'blockgroup': None, 'created_date': '2021-03-24', 'description': 'testing', 'designer': None, 'dev_date': '2021-03-24', 'dev_milestone': 'development', 'developer': [{'name': 'zt', 'type': 'duck.co', 'url': 'https://duck.co/user/zt'}], 'example_query': '', 'id': 'just_another_test', 'is_stackexchange': 0, 'js_callback_name': 'another_test', 'live_date': None, 'maintainer': {'github': ''}, 'name': 'Just Another Test', 'perl_module': 'DDG::Lontail::AnotherTest', 'producer': None, 'production_state': 'offline', 'repo': 'fathead', 'signal_from': 'just_another_test', 'src_domain': 'how about there', 'src_id': None, 'src_name': 'hi there', 'src_options': {'directory': '', 'is_fanon': 0, 'is_mediawiki': 0, 'is_wikipedia': 0, 'language': '', 'min_abstract_length': None, 'skip_abstract': 0, 'skip_abstract_paren': 0, 'skip_icon': 0, 'skip_image_name': 0, 'skip_qr': '', 'src_info': '', 'src_skip': ''}, 'src_url': 'Hello there', 'status': None, 'tab': 'is this source', 'topic': [], 'unsafe': None}}
Basically, everything is empty, even the Heading key, and I know the query "US Presidents" was sent, URL-encoded, in the request. This issue seems to affect every query I send that contains a space. Even when I go to the url "https://api.duckduckgo.com/?q=US%20Presidents&format=json&pretty=1" in a browser, all I get is a bunch of blank json keys.
My next question is this: when I send in something like "1+1", the json response's "Answer" key is this:
{'from': 'calculator', 'id': 'calculator', 'name': 'Calculator', 'result': '', 'signal': 'high', 'templates': {'group': 'base', 'options': {'content': 'DDH.calculator.content'}}}
Everything else seems to be correct, but shouldn't 'result' be '2'? The entire rest of the json looks right, including all the 'RelatedTopics'.
Any help with this would be greatly appreciated.
Basically the duckduckgo api is not a real search engine; it is more like a dictionary. So try US%20President instead of US%20Presidents and you'll get an answer. For encoding spaces you can use %20, but if the query is not a fixed term I would prefer the plus sign, which you get with urllib.parse.quote_plus().
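The difference between the two encoders (standard library only):

```python
import urllib.parse

q = "US President"
print(urllib.parse.quote(q))       # US%20President
print(urllib.parse.quote_plus(q))  # US+President
```

quote_plus encodes spaces as '+', which this api accepts in the query string just like %20.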
You're right about the calculation, but I see absolutely no use case for calling a calculator api from within Python code. It is like using a trampoline to travel to the moon when there is a rocket available. Maybe they see it the same way and simply do not offer calculator results through their api?!
I'm writing a JQL query to fetch only Service Requests. I'm not able to filter on the Service Request issue type name shown in the json below using a JQL query. Any help will be appreciated.
'jql': 'key=ITSM-1917' => This works, but I'm trying to fetch based on issuetype.name='Service Request'.
{'expand': 'names,schema', 'startAt': 0, 'maxResults': 100, 'total': 1, 'issues': [{'expand': 'operations,versionedRepresentations,editmeta,changelog,renderedFields', 'id': '373234', 'self': '', 'key': 'ITSM-1917', 'fields': {'issuetype': {'self': '', 'id': '10300', 'description': 'Created by JIRA Service Desk.', 'iconUrl': '', '**name': 'Service Request**', 'subtask': False, 'avatarId': 11006}, 'assignee': {'self': '', 'name': 'IT Service Management', 'key': 'JIRAUSER10945', 'emailAddress': '', 'avatarUrls': {'48x48': 'https://www.gravatar.com/avatar/067a17d84b041546f0f658bd011bc3ba?d=mm&s=48', '24x24': 'https://www.gravatar.com/avatar/067a17d84b041546f0f658bd011bc3ba?d=mm&s=24', '16x16': 'https://www.gravatar.com/avatar/067a17d84b041546f0f658bd011bc3ba?d=mm&s=16', '32x32': 'https://www.gravatar.com/avatar/067a17d84b041546f0f658bd011bc3ba?d=mm&s=32'}, 'displayName': 'IT Service Management', 'active': True, 'timeZone': 'America/New_York'}, 'created': '2022-11-23T01:34:11.000-0500', 'status': {'self': '', 'description': 'A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, certified, or are closed.', 'iconUrl': '', 'name': 'Resolved', 'id': '5', 'statusCategory': {'self': '', 'id': 3, 'key': 'done', 'colorName': 'green', 'name': 'Done'}}}}]}
You have to use the following JQL instead:
issuetype = "Service Request"
Please check the Jira search documentation. If you meant anything else, be more specific.
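If you are building the search call in Python, a sketch of what that looks like (the Jira base URL below is a hypothetical placeholder; requests URL-encodes the jql parameter for you):

```python
import requests

# Hypothetical Jira instance; substitute your own base URL and credentials.
req = requests.Request(
    "GET",
    "https://jira.example.com/rest/api/2/search",
    params={"jql": 'issuetype = "Service Request"', "maxResults": 100},
).prepare()
print(req.url)  # the jql value is percent-encoded in the query string
```

Sending the prepared request (with your auth) returns the same issues structure shown in the question, already filtered by issue type.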
I need to convert data from an api call into a dataframe. After calling the api I get the following json object:
{'responseID': 149882407,
'surveyID': 9711255,
'surveyName': 'NPS xx yy',
'ipAddress': '170.231.171.253',
'timestamp': '20 Aug, 2022 01:37:29 PM PET',
'location': {'country': None,
'region': '',
'latitude': -12.0439,
'longitude': -77.0281,
'radius': 0.0,
'countryCode': 'PE'},
'duplicate': False,
'timeTaken': 6,
'responseStatus': 'Completed',
'externalReference': '',
'customVariables': {'custom1': None,
'custom2': None,
'custom3': None,
'custom4': None,
'custom5': None},
'language': 'Spanish (Latin America)',
'currentInset': '',
'operatingSystem': 'ANDROID1',
'osDeviceType': 'MOBILE',
'browser': 'CHROME10',
'responseSet': [{'questionID': 106457509,
'questionDescription': '',
'questionCode': 'Q3',
'questionText': '¿El vendedor te recomendó algún producto adicional? ',
'imageUrl': None,
'answerValues': [{'answerID': 571204020,
'answerText': 'SI',
'value': {'scale': '1',
'other': '',
'dynamicExplodeText': '',
'text': '',
'result': '',
'fileLink': '',
'weight': 0.0}}]},
{'questionID': 106457510,
'questionDescription': '{detractor:Nada probable,promoter:Altamente probable}',
'questionCode': 'Q8',
'questionText': '¿Cuán probable es que recomiendes las tiendas Samsung a un familiar o amigo?',
'imageUrl': None,
'answerValues': [{'answerID': 571204032,
'answerText': '10',
'value': {'scale': '11',
'other': '',
'dynamicExplodeText': '',
'text': '',
'result': '',
'fileLink': '',
'weight': 0.0}}]},
{'questionID': 106457511,
'questionDescription': '',
'questionCode': 'Q6',
'questionText': '¿En qué fallamos?',
'imageUrl': None,
'answerValues': []},
{'questionID': 106457512,
'questionDescription': '',
'questionCode': 'Q4',
'questionText': '¿En qué debemos mejorar?',
'imageUrl': None,
'answerValues': []},
{'questionID': 106457513,
'questionDescription': '',
'questionCode': 'Q5',
'questionText': '¿Por qué nos felicitas?',
'imageUrl': None,
'answerValues': [{'answerID': 571204035,
'answerText': '',
'value': {'scale': '',
'other': '',
'dynamicExplodeText': '',
'text': '',
'result': '',
'fileLink': '',
'weight': 0.0}}]}],
'utctimestamp': 29}
The json file is a list of dictionaries, and each dictionary (like the one above) represents one client's response to a five-question survey.
The information that I want to extract is: responseID, surveyID, ipAddress, timestamp, latitude, longitude, questionText (inside responseSet), and scale and text (inside answerValues). The json object I showed would become one row of the dataframe.
First I tried pd.json_normalize(), which correctly extracts responseID, surveyID, ipAddress, timestamp, latitude, longitude and timeTaken, but since responseSet is a list, it just remains a list within the dataframe.
I tried to use to_list() to expand this column of lists into multiple columns, but this quickly got out of hand since there are dicts within dicts within lists within dicts. In other words, it's very heavily nested, and I need to extract the answers to five questions, where each answer may be in text or scale. So I figured this wasn't the most pythonic way to do it.
Lastly I used json_normalize with answerValues as the record path, which gave me a dataframe in which every row was an individual answer. So I had 5 rows per client (sometimes fewer, since it wouldn't extract an answer if the client left the question blank). Next I used pivot to obtain a dataframe that was closer to what I wanted, and finally merged it with my previous dataframe.
def transform_json(data):
    flatten_json = pd.json_normalize(data)
    answers_long = pd.json_normalize(data,
                                     record_path=["responseSet", "answerValues"],
                                     meta=["responseID",
                                           ["responseSet", "questionText"]])
    answers_long["value"] = answers_long["value.text"] + answers_long["value.scale"]
    answers = answers_long.pivot(index="responseID",
                                 columns="responseSet.questionText",
                                 values="value").reset_index()
    df = flatten_json.merge(answers,
                            how="left",
                            on="responseID")
    return df
I wonder what the best way to achieve this would be, since I don't think my approach was ideal. Maybe there is a way to completely flatten the json file, including the nested lists and dictionaries.
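An alternative to the double json_normalize plus pivot is to flatten each response by hand, since the set of wanted fields is known. A sketch (the toy sample below only mirrors the shape of the real data):

```python
import pandas as pd

# Toy response in the same shape as the object above.
sample = {
    "responseID": 149882407, "surveyID": 9711255,
    "ipAddress": "170.231.171.253",
    "timestamp": "20 Aug, 2022 01:37:29 PM PET",
    "location": {"latitude": -12.0439, "longitude": -77.0281},
    "responseSet": [
        {"questionText": "How likely are you to recommend us?",
         "answerValues": [{"value": {"scale": "11", "text": ""}}]},
        {"questionText": "What should we improve?", "answerValues": []},
    ],
}

def flatten_response(r):
    row = {
        "responseID": r["responseID"], "surveyID": r["surveyID"],
        "ipAddress": r["ipAddress"], "timestamp": r["timestamp"],
        "latitude": r["location"]["latitude"],
        "longitude": r["location"]["longitude"],
    }
    # One column per question; prefer scale, fall back to text,
    # and leave unanswered questions as None.
    for q in r["responseSet"]:
        vals = q["answerValues"]
        v = vals[0]["value"] if vals else {}
        row[q["questionText"]] = v.get("scale") or v.get("text") or None
    return row

df = pd.DataFrame([flatten_response(r) for r in [sample]])
```

With the real data this becomes pd.DataFrame([flatten_response(r) for r in data]), and no pivot or merge is needed.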
I am trying to convert a list into JSON using Python. The result from a query looks something like this :
Output:
[<Record r=<Relationship id=106 nodes=(<Node id=71 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '105', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'remove', 'sing': '', 'pl': '', 'title': 'BMProcessor', 'version': 'v6.0'}>, <Node id=59 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '93', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'A DataProcessor which represents a ML algorithm', 'sing': '', 'pl': '', 'title': 'LearningProcessor', 'version': 'v6.0'}>) type='subclass_of' properties={}> b=<Node id=59 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '93', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'A DataProcessor which represents a ML algorithm', 'sing': '', 'pl': '', 'title': 'LearningProcessor', 'version': 'v6.0'}> n=<Node id=71 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '105', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'remove', 'sing': '', 'pl': '', 'title': 'BMProcessor', 'version': 'v6.0'}>>]
Function :
def runQuery(query):
    pprint.pprint(connection.execute(query))
When I perform a simple json.dumps() on it, it fails with TypeError: Object of type is not JSON serializable.
I want to print this result as JSON. How can I do so?
You can call result.data(), which gives you a dictionary.
You can also iterate over the records in the cursor and convert them one by one, extracting the fields you want, which is what the linked example above does - pass the cursor along and use the blob['field'] syntax:
def serialize_genre(genre):
    return {
        'id': genre['id'],
        'name': genre['name'],
    }
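Putting both suggestions together, a sketch (assuming result is the cursor returned by connection.execute(), and that each record exposes the standard neo4j .data() method):

```python
import json

def records_to_json(result):
    # Each neo4j Record has a .data() method that returns a plain dict;
    # default=str covers anything json can't handle natively,
    # such as the frozenset labels in the output above.
    return json.dumps([record.data() for record in result],
                      indent=2, default=str)
```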
I'm unable to parse JSON. This is the JSON snippet returned from my requests.post response:
{'result': {'parent': '', 'reason': '', 'made_sla': 'true', 'backout_plan': '', 'watch_list': '', 'upon_reject': 'cancel', 'sys_updated_on': '2018-08-22 11:16:09', 'type': 'Comprehensive', 'conflict_status': 'Not Run', 'approval_history': '', 'number': 'CHG0030006', 'test_plan': '', 'cab_delegate': '', 'sys_updated_by': 'admin', 'opened_by': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441', 'value': '6816f79cc0a8016401c5a33be04be441'}, 'user_input': '', 'requested_by_date': '', 'sys_created_on': '2018-08-22 11:16:09', 'sys_domain': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user_group/global', 'value': 'global'}, 'state': '-5', 'sys_created_by': 'admin', 'knowledge': 'false', 'order': '', 'phase': 'requested', 'closed_at': '', 'cmdb_ci': '', 'delivery_plan': '', 'impact': '3', 'active': 'true', 'review_comments': '', 'work_notes_list': '', 'business_service': '', 'priority': '4', 'sys_domain_path': '/', 'time_worked': '', 'cab_recommendation': '', 'expected_start': '', 'production_system': 'false', 'opened_at': '2018-08-22 11:16:09', 'review_date': '', 'business_duration': '', 'group_list': '', 'requested_by': {'link': 'https://dev6345.service-now.com/api/now/table/sys_user/user1', 'value': 'user1'}, 'work_end': '', 'change_plan': '', 'phase_state': 'open', 'approval_set': '', 'cab_date': '', 'work_notes': '', 'implementation_plan': '', 'end_date': '', 'short_description': '', 'close_code': '', 'correlation_display': '', 'delivery_task': '', 'work_start': '', 'assignment_group': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user_group/testgroup', 'value': 'testgroup'}, 'additional_assignee_list': '', 'outside_maintenance_schedule': 'false', 'description': '', 'on_hold_reason': '', 'calendar_duration': '', 'std_change_producer_version': '', 'close_notes': '', 'sys_class_name': 'change_request', 'closed_by': '', 'follow_up': '', 'sys_id': '436eda82db4023008e357a61399619ee', 
'contact_type': '', 'cab_required': 'false', 'urgency': '3', 'scope': '3', 'company': '', 'justification': '', 'reassignment_count': '0', 'review_status': '', 'activity_due': '', 'assigned_to': '', 'start_date': '', 'comments': '', 'approval': 'requested', 'sla_due': '', 'comments_and_work_notes': '', 'due_date': '', 'sys_mod_count': '0', 'on_hold': 'false', 'sys_tags': '', 'conflict_last_run': '', 'escalation': '0', 'upon_approval': 'proceed', 'correlation_id': '', 'location': '', 'risk': '3', 'category': 'Other', 'risk_impact_analysis': ''}}
I searched on the net, and it says it's not parsing because of the single quotes.
So I tried to convert the single quotes into double quotes:
with open('output.json', 'r') as handle:
    handle = open('output.json')
    str = "123"
    str = handle.stringify()  # also with .str()
    str = str.replace("\'", "\"")
    jsonobj = json.load(json.dumps(handle))
But it tells me there is no attribute stringify or str, since handle is a file object and those are string methods. So can you please help me with the correct way to parse a json object with single quotes from a file?
The code:-
import requests
import json
from pprint import pprint
print("hello world")
url="********"
user="****"
password="*****"
headers={"Content-Type":"application/xml","Accept":"application/json"}
#response=requests.get(url,auth=(user,password),headers=headers)
response = requests.post(url, auth=(user, password), headers=headers ,data="******in xml****")
print(response.status_code)
print(response.json())
jsonobj=json.load(json.dumps(response.json()))
pprint(jsonobj)
What response.json() gives you is not JSON, it's already a dictionary - one that can be encoded back into JSON via json.dumps(result).
JSON is a text format for representing objects (the "ON" stands for "object notation"). You can convert a dictionary (or list or scalar) into a JSON-encoded string, or the other way around.
What the json() method does is take the JSON response body and parse it for you (with json.loads), so you don't have to think about JSON at all.
You haven't shown the code where you get the data from the post. However, you are almost certainly doing something like this:
response = requests.post('...')
data = response.json()
Here data is already parsed from JSON to a Python dict; that is what the requests json method does. There is no need to parse it again.
If you need raw JSON rather than Python data, then don't call the json method. Get the text directly from the response:
data = response.text
Now data will be a string containing JSON (response.content gives the same thing as undecoded bytes).
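To see the difference concretely, a small sketch using a fragment of the dict from the question. Printing a Python dict shows single quotes because that is Python's repr, not JSON; json.dumps produces the double-quoted JSON text:

```python
import json

# What response.json() hands you: a plain Python dict.
result = {'result': {'number': 'CHG0030006', 'state': '-5'}}

text = json.dumps(result)   # dict -> JSON string (double quotes)
parsed = json.loads(text)   # JSON string -> dict again

print(text)                 # {"result": {"number": "CHG0030006", "state": "-5"}}
assert parsed == result     # the round trip is lossless
```

So there is never a need to fix quotes by hand: the single quotes only appear when Python prints the already-parsed dictionary.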
I'm trying to figure out what's causing this error on line 60 (it only arises on the first line):
UnboundLocalError: local variable 'name' referenced before assignment
Also: can someone tell me why it's not picking up 'website'?
Code:
import re
import csv
import json
import jsonpickle
from nameparser import HumanName
from pprint import pprint
from string import punctuation, whitespace
def parse_ieca_gc(s):
    ########################## HANDLE NAME ELEMENT ###############################
    degrees = ['M.A.T.','Ph.D.','MA','J.D.','Ed.M.', 'M.A.', 'M.B.A.', 'Ed.S.', 'M.Div.', 'M.Ed.', 'RN', 'B.S.Ed.', 'M.D.']
    degrees_list = []
    # check whether the name string has an area / has a comma
    if ',' in s['name']:
        # separate area of practice from name and degree and bind this to var 'area'
        split_area_nmdeg = s['name'].split(',')
        area = split_area_nmdeg.pop()
        print 'split area nmdeg'
        print area
        print split_area_nmdeg
        # Split the name and deg by spaces. If there's a deg, it will match with one of elements and will be stored deg list. The deg is removed name_deg list and all that's left is the name.
        split_name_deg = re.split('\s', split_area_nmdeg[0])
        for word in split_name_deg:
            for deg in degrees:
                if deg == word:
                    degrees_list.append(split_name_deg.pop())
                    name = ' '.join(split_name_deg)
    # if the name string does not contain a comma, just parse as normal string
    else:
        area = []
        split_name_deg = re.split('\s', s['name'])
        for word in split_name_deg:
            for deg in degrees:
                if deg == word:
                    degrees_list.append(split_name_deg.pop())
                    name = ' '.join(split_name_deg)
    # area of practice
    category = area
    # name
    name = HumanName(name)
    first_name = name.first
    middle_name = name.middle
    last_name = name.last
    title = name.title
    full_name = dict(first_name=first_name, middle_name=middle_name, last_name=last_name, title=title)
    # degrees
    degrees = degrees_list
    # website
    website = s.get('website', '')
    gc_ieca = dict(
        name = name,
        website = website,
        degrees = degrees,
    ),
myjson = [] # myjson = list of dictionaries where each dictionary
with(open("ieca_first_col_fake_text.txt", "rU")) as f:
    sheet = csv.DictReader(f, delimiter="\t")
    for row in sheet:
        myjson.append(row)

for i in range(4):
    s = myjson[i]
    a = parse_ieca_gc(s)
    pprint(a)
example data (made up data):
name phone email website
Diane Grant Albrecht M.S.
"Lannister G. Cersei M.A.T., CEP" 111-222-3333 cersei#got.com www.got.com
Argle D. Bargle Ed.M.
Sam D. Man Ed.M. 000-000-1111 dman123#gmail.com www.daManWithThePlan.com
D G Bamf M.S.
Amy Tramy Lamy Ph.D.
range 4
split area nmdeg
CEP
['Lannister G. Cersei M.A.T.']
({'additionaltext': '',
'bio': '',
'category': ' CEP',
'certifications': [],
'company': '',
'counselingoptions': [],
'counselingtype': [],
'datasource': {'additionaltext': '',
'linktext': '',
'linkurl': '',
'logourl': ''},
'degrees': ['M.A.T.'],
'description': '',
'email': {'emailtype': [], 'value': 'cersei#got.com'},
'facebook': '',
'languages': 'english',
'linkedin': '',
'linktext': '',
'linkurl': '',
'location': {'address': '',
'city': '',
'country': 'united states',
'geo': {'lat': '', 'lng': ''},
'loc_name': '',
'locationtype': '',
'state': '',
'zip': ''},
'logourl': '',
'name': {'first_name': u'Lannister',
'last_name': u'Cersei',
'middle_name': u'G.',
'title': u''},
'phone': {'phonetype': [], 'value': '1112223333'},
'photo': '',
'price': {'costrange': [], 'costtype': []},
'twitter': '',
'website': ''},)
({'additionaltext': '',
'bio': '',
'category': [],
'certifications': [],
'company': '',
'counselingoptions': [],
'counselingtype': [],
'datasource': {'additionaltext': '',
'linktext': '',
'linkurl': '',
'logourl': ''},
'degrees': ['Ed.M.'],
'description': '',
'email': {'emailtype': [], 'value': ''},
'facebook': '',
'languages': 'english',
'linkedin': '',
'linktext': '',
'linkurl': '',
'location': {'address': '',
'city': '',
'country': 'united states',
'geo': {'lat': '', 'lng': ''},
'loc_name': '',
'locationtype': '',
'state': '',
'zip': ''},
'logourl': '',
'name': {'first_name': u'Argle',
'last_name': u'Bargle',
'middle_name': u'D.',
'title': u''},
'phone': {'phonetype': [], 'value': ''},
'photo': '',
'price': {'costrange': [], 'costtype': []},
'twitter': '',
'website': ''},)
({'additionaltext': '',
'bio': '',
'category': [],
'certifications': [],
'company': '',
'counselingoptions': [],
'counselingtype': [],
'datasource': {'additionaltext': '',
'linktext': '',
'linkurl': '',
'logourl': ''},
'degrees': ['Ed.M.'],
'description': '',
'email': {'emailtype': [], 'value': 'dman123#gmail.com'},
'facebook': '',
'languages': 'english',
'linkedin': '',
'linktext': '',
'linkurl': '',
'location': {'address': '',
'city': '',
'country': 'united states',
'geo': {'lat': '', 'lng': ''},
'loc_name': '',
'locationtype': '',
'state': '',
'zip': ''},
'logourl': '',
'name': {'first_name': u'Sam',
'last_name': u'Man',
'middle_name': u'D.',
'title': u''},
'phone': {'phonetype': [], 'value': '0000001111'},
'photo': '',
'price': {'costrange': [], 'costtype': []},
'twitter': '',
'website': ''},)
({'additionaltext': '',
'bio': '',
'category': [],
'certifications': [],
'company': '',
'counselingoptions': [],
'counselingtype': [],
'datasource': {'additionaltext': '',
'linktext': '',
'linkurl': '',
'logourl': ''},
'degrees': ['M.S.'],
'description': '',
'email': {'emailtype': [], 'value': ''},
'facebook': '',
'languages': 'english',
'linkedin': '',
'linktext': '',
'linkurl': '',
'location': {'address': '',
'city': '',
'country': 'united states',
'geo': {'lat': '', 'lng': ''},
'loc_name': '',
'locationtype': '',
'state': '',
'zip': ''},
'logourl': '',
'name': {'first_name': u'D',
'last_name': u'Bamf',
'middle_name': u'G',
'title': u''},
'phone': {'phonetype': [], 'value': ''},
'photo': '',
'price': {'costrange': [], 'costtype': []},
'twitter': '',
'website': ''},)
You are using a local variable name here:
name = HumanName(name)
You do set name before that point, but only if certain conditions match. When those conditions do not match, name is never assigned to and the exception is thrown.
For example, in the first if branch, the loop is:
for word in split_name_deg:
    for deg in degrees:
        if deg == word:
            degrees_list.append(split_name_deg.pop())
            name = ' '.join(split_name_deg)
If deg == word never matches, then name is never set either.
Your function also doesn't return anything, so the line a = parse_ieca_gc(s) will only ever assign None to a. You need to use the return keyword to set a return value for your function.
Last but not least, you only pass the first row from your CSV file to the function, and that first row has no website associated with it:
Diane Grant Albrecht M.S.
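A minimal sketch of the fix for both problems: compute the name with an unconditional assignment, and return the result. (This restructures the inner loops, which in the original also pop from the list while iterating over it.)

```python
def split_name_and_degrees(name_str, degrees):
    # Separate trailing degree tokens (e.g. 'Ed.M.') from the name itself.
    words = name_str.split()
    degrees_list = [w for w in words if w in degrees]
    name_words = [w for w in words if w not in degrees]
    name = ' '.join(name_words)   # always assigned, even with no degree match
    return name, degrees_list

print(split_name_and_degrees("Sam D. Man Ed.M.", ["Ed.M.", "Ph.D."]))
# ('Sam D. Man', ['Ed.M.'])
```

Inside parse_ieca_gc the same pattern applies to both branches, and the function should end with return gc_ieca - note: without the trailing comma, which currently wraps the dict in a tuple and explains the ({...},) output above.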
I thought about leaving a comment, but I guess this should actually qualify as an answer. You seem to like programming (or at least to be serious about it), so please take my answer positively: not as another piece of criticism but as advice on how to avoid similar errors/problems in the future.
These are just a few points that I came up with after reading your code:
1) Add entry points
The code is messy, which makes it difficult to find and follow the main line of your thinking (the program logic). Since you are not just prototyping or experimenting, but writing a functioning program, you should really add an entry point. In python one first defines the module with all its elements (mainly imports, constants and functions), and only then sets the entry point with an if __name__ == '__main__': section at the bottom of the module.
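A sketch of that layout (the field names and sample data are placeholders):

```python
import csv
import io

def parse_row(row):
    # real parsing logic lives in small, testable functions
    return dict(row)

def load_rows(f):
    return [parse_row(row) for row in csv.DictReader(f, delimiter="\t")]

if __name__ == '__main__':
    # entry point: everything above is importable without side effects
    sample = io.StringIO("name\tphone\nSam D. Man\t000-000-1111\n")
    print(load_rows(sample))
```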
2) Break code into many functions
The program is not that big, but because you are trying to do too much (very quick-n-dirty) in just a few lines, it becomes dangerous. Your code grows organically very fast, exposing it to errors like this one. Please take your time and learn how to break your code into functions, which are the basic building blocks of each module. Try to define many small self-contained functions in your module and call them from the main part of the program. If you manage to give them proper names, your code will be very readable, especially starting from the __main__ part.
Treat each function as a small program (divide and conquer). Keep each function small in number of lines (<= 20) and compact in number of arguments (<= 5-7). It has many advantages:
you can test the code of each function individually and be sure that it is working, by using calls from __main__, doctests or unittests. This way you will always have full control of your program, even before/without applying sophisticated debugging techniques
each function has its own scope, and variables declared inside it are bound to that local scope. This helps to avoid complications with global variables that may lead to "side effects".
functions can be imported by other modules. This gives you better organization of the code and greater re-usability.
3) Never write too much code without running it
Progressing slowly allows you to keep a constant overview of your idea while writing the program. Even if the code ends up uglier than you would wish it to be, any incremental change should be traceable (you kind of know/observe how much code is added with each step). You can also start using version control, even locally (just for yourself); it will allow you to progress slowly by keeping your commits atomic and self-contained.
4) Print and die
If you still went too far and wrote too much code without running it, you end up in a situation similar to yours now. Another trick is to put an exit() call in the middle of, or just before, the newly written code that breaks (found by checking the line number in the exception info). In most cases, printing out variables and checking whether their values match what you expect helps to find the problem. Otherwise, just comment out a section of your program to take a few steps back (cutting it down until whatever is "on" works).
5) Cyclomatic complexity
Avoid too many nested loops and conditional constructs; try to use no more than 2-3 nested blocks per function. It is an important matter. Use tools like pylint and pycodestyle (PEP 8) to check the quality of your code. You will be surprised how many complaints those tools can find about code that looks decent. E.g. there is a lot of motivation behind the 80-character limit per line: it really does prevent writing too much hanging and nested code. Ideally code is always compact: each function is not too wide, and not too tall.
6) Avoid
Finally, try to avoid:
reassigning a variable to itself, especially if you are changing its type in the process: name = HumanName(name)
defining "hanging" variables inside nested blocks and expressions that may or may not execute
dependency on global variables (avoid it exactly by defining functions!) and too many cross-dependencies. Unless you are programming a recursive algorithm, you should be totally fine with a top-down approach, without depending on badly tested, uncertain outcomes.
respect indentation - always replace tabs! You will not make it far in python if you don't (set ts=4 | set sw=4 | set et)
If you write a line of code that later takes you too long to understand, consider correcting it. If you write a function that you later don't understand, consider throwing it away. And if you do everything right and still get an error that you don't understand, consider going to sleep.
PS
Don't forget to smoke the famous
>>> import this
Hope some of the points are useful.
GL!