Django shell command to change a value in json data - python

I am a Django newbie and I was playing around in Django's manage.py shell. Here is something I am trying in the shell:
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]
>>> data[0]
{'primary_program': False, 'id': 3684}
>>> data[1]
{'primary_program': True, 'id': 3685}
>>> data[0].values()
[False, 3684]
>>> data[1].values()
[True, 3685]
>>>
What command should I use here to update the value of primary_program in data[1] to False while keeping the rest of the JSON the same?
EDIT:
This is how I am getting my data. I have a REST framework API and I am using a serializer to read the data.
>>> from acadprog.models import *
>>> from acadprog.serializers import *
>>> from django.http import Http404
>>> from rest_framework import status
>>> from rest_framework.views import APIView
>>> from rest_framework.decorators import api_view
>>> from rest_framework.response import Response
>>> qs = Student_academic_program.objects.filter(student=2773951)
>>> qs.values()
[{'academic_program_id': 595, 'academic_program_gpa': None, 'student_id': 2773951, 'credits_completed': 28, 'primary_program': False, u'id': 3684}, {'academic_program_id': 596, 'academic_program_gpa': None, 'student_id': 2773951, 'credits_completed': 26, 'primary_program': True, u'id': 3685}]
>>> len(qs.values())
2
>>> data = qs.values('id','primary_program')
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]
>>> data[1]['primary_program'] = False
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]
>>> data['primary_program'][1] = False
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/home/abhishek/projects/texascompletes/local/lib/python2.7/site-packages/django/db/models/query.py", line 108, in __getitem__
    raise TypeError
TypeError
>>> data[1]['primary_program'] = False
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]

Since data[1] is just a dictionary, set the value by key:
>>> data = [{'primary_program': False, 'id': 3684}, {'primary_program': True, 'id': 3685}]
>>> data[1]['primary_program'] = False
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': False, 'id': 3685}]
UPD:
What you are getting in data is a queryset, which is re-evaluated against the database every time you inspect it, so your in-place edit is lost. If you need an updated list of dictionaries, convert the queryset to a list first:
>>> data = list(qs.values('id','primary_program'))
>>> data[1]['primary_program'] = False
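Since data is now a plain Python list, the change sticks:
>>> data
[{'primary_program': False, 'id': 3684}, {'primary_program': False, 'id': 3685}]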
If you want to update the value in the database:
>>> qs = Student_academic_program.objects.filter(student=2773951)
>>> data = qs[1]
>>> data.primary_program = False
>>> data.save()
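Note that qs[1] depends on the queryset's ordering, which is not guaranteed without an order_by(); fetching the row by its primary key is a safer sketch:
>>> obj = Student_academic_program.objects.get(pk=3685)
>>> obj.primary_program = False
>>> obj.save()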
Also, if you want to set primary_program to False for all of the student's programs, you can use update() for bulk update:
>>> Student_academic_program.objects.filter(student=2773951).update(primary_program=False)
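update() runs a single UPDATE statement (it does not call save() or fire model signals) and returns the number of rows matched, so here it would report 2:
>>> Student_academic_program.objects.filter(student=2773951).update(primary_program=False)
2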

Related

Python Download Roblox Data

I'm attempting to format JSON from the Roblox API. I have a names.txt that stores all of the names. This is how the file looks:
rip_robson0007
Abobausrip
app_58230
kakoytochelik123
Ameliathebest727
Sherri0708
HixPlayk
mekayla_091
ddddorffg
ghfgrgt7nfdbfj
TheWolfylikedog
paquita12345jeje
hfsgfhsgfhgfhds
It stores a bunch of usernames separated by newlines. The code is supposed to use the names and, for each name, get the JSON from the endpoint https://api.roblox.com/users/get-by-username?username={name} and format it as I have in my code. It always returns error 429 (Too Many Requests) and doesn't save any of the data.
This is the code:
import json
import requests
import time

# Read the names from the text file
with open("./txt/names.txt", "r") as f:
    names = f.read().split("\n")

# Initialize an empty dictionary to store the users
users = {}

# Iterate through the names
for name in names:
    time.sleep(5)
    response = requests.get(f"https://api.roblox.com/users/get-by-username?username={name}")
    # Check the status code of the response
    if response.status_code != 200:
        print(f"Failed to get data for {name}: {response.status_code}")
        continue
    # Try to parse the response as JSON
    try:
        user_data = response.json()
    except ValueError:
        print(f"Failed to parse JSON for {name}")
        continue
    # Extract the necessary information from the response
    user_id = user_data["Id"]
    username = user_data["Username"]
    avatar_uri = user_data["AvatarUri"]
    avatar_final = user_data["AvatarFinal"]
    is_online = user_data["IsOnline"]
    # Add the user's information to the dictionary
    users[user_id] = {
        "Id": user_id,
        "Username": username,
        "AvatarUri": avatar_uri,
        "AvatarFinal": avatar_final,
        "IsOnline": is_online
    }

# Save the dictionary to a JSON file
with open("users.json", "w") as f:
    json.dump(users, f)
You can often overcome this with judicious use of proxies.
Start by getting a list of proxies from which you will make random selections. I have a scraper that acquires proxies from https://free-proxy-list.net
My list (extract) looks like this:
http://103.197.71.7:80 - no
http://163.116.177.33:808 - yes
'yes' means that HTTPS is supported. This list currently contains 95 proxies; it varies depending on the response from my scraper.
So we start by parsing the proxy list. Subsequently we choose proxies at random before trying to access the Roblox API. This may not run quickly because the proxies are not necessarily reliable. They are free after all.
from requests import get as GET, packages as PACKAGES
from random import choice as CHOICE
from concurrent.futures import ThreadPoolExecutor as TPE

PACKAGES.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ALL:@SECLEVEL=1'

ROBLOX_API = 'https://api.roblox.com/users/get-by-username'
TIMEOUT = 1

def get_proxies():
    http, https = list(), list()
    with open('proxylist.txt') as p:
        for line in p:
            proxy_url, _, supports_https = line.split()
            _list = https if supports_https == 'yes' else http
            _list.append(proxy_url)
    return http, https

http, https = get_proxies()

def process(name):
    params = {'username': name.strip()}
    while True:
        try:
            proxy = {'http': CHOICE(http), 'https': CHOICE(https)}
            (r := GET(ROBLOX_API, params=params, proxies=proxy, timeout=TIMEOUT)).raise_for_status()
            if (j := r.json()).get('success', True):
                print(j)
                break
        except Exception as e:
            pass

with open('names.txt') as names:
    with TPE() as executor:
        executor.map(process, names)
In principle, the while loop in process() could get stuck so it might make sense to limit the number of retries.
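For example, a bounded variant of process() might look like this (MAX_RETRIES is an assumed cap I'm adding here, not part of the code above):
MAX_RETRIES = 20  # assumed cap, tune to taste

def process(name):
    params = {'username': name.strip()}
    for attempt in range(MAX_RETRIES):
        try:
            # pick a fresh random proxy on every attempt
            proxy = {'http': CHOICE(http), 'https': CHOICE(https)}
            (r := GET(ROBLOX_API, params=params, proxies=proxy, timeout=TIMEOUT)).raise_for_status()
            if (j := r.json()).get('success', True):
                print(j)
                return j
        except Exception:
            pass  # dead or rate-limited proxy; try another
    print(f'giving up on {name.strip()} after {MAX_RETRIES} attempts')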
The unbounded version produces the following output:
{'Id': 4082578648, 'Username': 'paquita12345jeje', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 2965702542, 'Username': 'mekayla_091', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 4079018794, 'Username': 'app_58230', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 3437922948, 'Username': 'kakoytochelik123', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 4082346906, 'Username': 'Abobausrip', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 2988555289, 'Username': 'HixPlayk', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 3286921649, 'Username': 'Sherri0708', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 1441252794, 'Username': 'ghfgrgt7nfdbfj', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 4088896225, 'Username': 'ddddorffg', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 3443374919, 'Username': 'TheWolfylikedog', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 3980932331, 'Username': 'Ameliathebest727', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 3773237135, 'Username': 'rip_robson0007', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
{'Id': 4082991447, 'Username': 'hfsgfhsgfhgfhds', 'AvatarUri': None, 'AvatarFinal': False, 'IsOnline': False}
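If you would rather not depend on free proxies, an alternative (my suggestion, not part of the answer above) is to honor the Retry-After header that commonly accompanies a 429 response and back off over a single connection:
import time
import requests

def get_user(name, max_retries=5):
    # Sleep for however long the server asks instead of rotating proxies.
    for _ in range(max_retries):
        r = requests.get('https://api.roblox.com/users/get-by-username',
                         params={'username': name})
        if r.status_code != 429:
            r.raise_for_status()
            return r.json()
        time.sleep(int(r.headers.get('Retry-After', 10)))  # 10 s default is assumed
    return None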

Can't fetch an item out of weird json content

I'm trying to get some items from json content. However, the structure of that json content is foreign to me and as a result I can't fetch the value of property out of it.
I've tried so far with:
import json
import requests
from bs4 import BeautifulSoup
link = 'https://www.zillow.com/homedetails/5958-SW-4th-St-Miami-FL-33144/43835884_zpid/'
def fetch_content(link):
    content = requests.get(link, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(content.text, "lxml")
    item = soup.select_one("script#hdpApolloPreloadedData").text
    print(json.loads(item)['apiCache'])

if __name__ == '__main__':
    fetch_content(link)
The result I get running the above script is:
{"VariantQuery{\"zpid\":43835884}":{"property":{"zpid":43835884,"streetAddress":"5958 SW 4th St",
which I can't process further because of that weird key in front.
Expected output:
{"zpid":43835884,"streetAddress":"5958 SW 4th St", ----
How can I get the value of that property?
You can get zpid and streetAddress out of that mangled JSON by decoding the apiCache string a second time and indexing with the full key:
>>> d = json.loads(json.loads(item)['apiCache'])
>>> d['VariantQuery{"zpid":43835884}']['property']['zpid']
43835884
>>> d['VariantQuery{"zpid":43835884}']['property']['streetAddress']
'5958 SW 4th St'
I noticed you can always get the zpid like this:
link = 'https://www.zillow.com/homedetails/5958-SW-4th-St-Miami-FL-33144/43835884_zpid/'
content = requests.get(link,headers={"User-Agent":"Mozilla/5.0"})
soup = BeautifulSoup(content.text,"lxml")
item = soup.select_one("script#hdpApolloPreloadedData").text
print(json.loads(item)['zpid'])
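Building on that, you could construct the apiCache key from the page's own zpid instead of hardcoding it (a sketch, assuming the key format shown above):
d = json.loads(item)
api_cache = json.loads(d['apiCache'])
prop = api_cache['VariantQuery{"zpid":%s}' % d['zpid']]['property']
print(prop['zpid'], prop['streetAddress'])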
Just modify your function to the following. I also added another function (process_fetched_content()) to give you some more freedom. You could simply run it and it will take care of situations even when you have multiple keys that start with 'VariantQuery{"zpid":'. The final output is a dict with the keys being your zpid and the values being what you are looking for.
If you have a lot of zpid values, this will let you accumulate them all together and then process them. The benefit is that the list of keys is then the list of zpids you have.
Here's how you could use this code.
results = process_fetched_content(raw_dictionary = fetch_content(link, verbose=False))
print(results)
output:
{'43835884': {'zpid': 43835884, 'streetAddress': '5958 SW 4th St', 'zipcode': '33144', 'city': 'Miami', 'state': 'FL', 'latitude': 25.76661, 'longitude': -80.292801, 'price': 340000, 'dateSold': 1576875600000, 'bathrooms': 2, 'bedrooms': 3, 'livingArea': 1757, 'yearBuilt': 1973, 'lotSize': 4331, 'homeType': 'SINGLE_FAMILY', 'homeStatus': 'RECENTLY_SOLD', 'photoCount': 19, 'imageLink': 'https://photos.zillowstatic.com/p_g/IS7yxihwtuqmlq1000000000.jpg', 'daysOnZillow': 0, 'isFeatured': False, 'shouldHighlight': False, 'brokerId': 0, 'zestimate': 341336, 'rentZestimate': 2200, 'listing_sub_type': {}, 'priceReduction': '', 'isUnmappable': False, 'rentalPetsFlags': 128, 'mediumImageLink': 'https://photos.zillowstatic.com/p_c/IS7yxihwtuqmlq1000000000.jpg', 'isPreforeclosureAuction': False, 'homeStatusForHDP': 'RECENTLY_SOLD', 'priceForHDP': 340000, 'festimate': 341336, 'isListingOwnedByCurrentSignedInAgent': False, 'isListingClaimedByCurrentSignedInUser': False, 'hiResImageLink': 'https://photos.zillowstatic.com/p_f/IS7yxihwtuqmlq1000000000.jpg', 'watchImageLink': 'https://photos.zillowstatic.com/p_j/IS7yxihwtuqmlq1000000000.jpg', 'tvImageLink': 'https://photos.zillowstatic.com/p_m/IS7yxihwtuqmlq1000000000.jpg', 'tvCollectionImageLink': 'https://photos.zillowstatic.com/p_l/IS7yxihwtuqmlq1000000000.jpg', 'tvHighResImageLink': 'https://photos.zillowstatic.com/p_n/IS7yxihwtuqmlq1000000000.jpg', 'zillowHasRightsToImages': True, 'desktopWebHdpImageLink': 'https://photos.zillowstatic.com/p_h/IS7yxihwtuqmlq1000000000.jpg', 'isNonOwnerOccupied': False, 'hideZestimate': False, 'isPremierBuilder': False, 'isZillowOwned': False, 'currency': 'USD', 'country': 'USA', 'taxAssessedValue': 224131, 'streetAddressOnly': '5958 SW 4th St', 'unit': ' '}}
Code
import json
import requests
from bs4 import BeautifulSoup
link = 'https://www.zillow.com/homedetails/5958-SW-4th-St-Miami-FL-33144/43835884_zpid/'
def fetch_content(link, verbose=False):
    content = requests.get(link, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(content.text, "lxml")
    item = soup.select_one("script#hdpApolloPreloadedData").text
    d = json.loads(item)['apiCache']
    d = json.loads(d)
    if verbose:
        print(d)
    return d

def process_fetched_content(raw_dictionary=None):
    if raw_dictionary is not None:
        keys = [k for k in raw_dictionary.keys() if k.startswith('VariantQuery{"zpid":')]
        # map each zpid (parsed out of the key) to that entry's 'property' dict
        results = dict((k.split(':')[-1].replace('}', ''), raw_dictionary.get(k).get('property', None)) for k in keys)
        return results
    else:
        return None

Scraping a list with scrapy and structure it

I'm trying to scrape every title and score from the page https://myanimelist.net/animelist/MoonlessMidnite?status=7 and return data in this form:
{"user": "moonlessmidnite", "anime": A, "score": x},
{"user": "moonlessmidnite", "anime": B, "score": x},
{"user": "moonlessmidnite", "anime": C, "score": x},
etc.
I managed to get the table:
table = response.xpath('.//tr[@class = "list-table-data"]')
score = table.xpath('.//td[@class = "data score"]//a/text()').extract()
title = table.xpath('.//td//a[@class = "link sort"]').extract()
but when I try to scrape the title or score I get some weird output like:
['\n ', '\n ', '${ item.anime_title }']
Look at the raw HTML of the website:
You see that it indeed contains ${ item.anime_title }.
That indicates that the content is generated via JavaScript.
There's no easy solution for that, you'll have to look at the XHR requests that are being done and see if you can get something meaningful.
If you look closely at the HTML, you will see that the data is contained in a big JSON string in the table's data-items attribute.
Try this in the scrapy shell:
fetch('https://myanimelist.net/animelist/MoonlessMidnite?status=7')
import json
json.loads(response.xpath('//table[@class="list-table"]/@data-items').extract_first())
This outputs something like this:
{'status': 2,
'score': 0,
'tags': '',
'is_rewatching': 0,
'num_watched_episodes': 1,
'anime_title': 'Hidan no Aria Special',
'anime_num_episodes': 1,
'anime_airing_status': 2,
'anime_id': 10604,
'anime_studios': None,
'anime_licensors': None,
'anime_season': None,
'has_episode_video': False,
'has_promotion_video': True,
'has_video': True,
'video_url': '/anime/10604/Hidan_no_Aria_Special/video',
'anime_url': '/anime/10604/Hidan_no_Aria_Special',
'anime_image_path': 'https://cdn.myanimelist.net/r/96x136/images/anime/2/29138.jpg?s=90cb8381c58c92d39862ac700c43f7b5',
'is_added_to_list': False,
'anime_media_type_string': 'Special',
'anime_mpaa_rating_string': 'PG-13',
'start_date_string': None,
'finish_date_string': None,
'anime_start_date_string': '12-21-11',
'anime_end_date_string': '12-21-11',
'days_string': None,
'storage_string': '',
'priority_string': 'Low'},
{'status': 6,
'score': 0,
'tags': '',
'is_rewatching': 0,
'num_watched_episodes': 0,
'anime_title': '.hack//Roots',
'anime_num_episodes': 26,
'anime_airing_status': 2,
'anime_id': 873,
'anime_studios': None,
'anime_licensors': None,
'anime_season': None,
'has_episode_video': False,
'has_promotion_video': True,
'has_video': True,
'video_url': '/anime/873/hack__Roots/video',
'anime_url': '/anime/873/hack__Roots',
'anime_image_path': 'https://cdn.myanimelist.net/r/96x136/images/anime/3/13050.jpg?s=db9ff70bf19742172f1d0140c95c4a65',
'is_added_to_list': False,
'anime_media_type_string': 'TV',
'anime_mpaa_rating_string': 'PG-13',
'start_date_string': None,
'finish_date_string': None,
'anime_start_date_string': '04-06-06',
'anime_end_date_string': '09-28-06',
'days_string': None,
'storage_string': '',
'priority_string': 'Low'}
You then just have to use these dicts to get the info that you need.
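For example, a minimal parse method along those lines might look like this (my sketch; the username could also be parsed out of the response URL, and the field names are the ones visible in the dump above):
import json

def parse(self, response):
    user = 'moonlessmidnite'
    data_items = response.xpath('//table[@class="list-table"]/@data-items').extract_first()
    for entry in json.loads(data_items):
        yield {'user': user,
               'anime': entry['anime_title'],
               'score': entry['score']}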

recursively collect string blocks in python

I have a custom data file formatted like this:
{
    data = {
        friends = {
            max = 0 0,
            min = 0 0,
        },
        family = {
            cars = {
                van = "honda",
                car = "ford",
                bike = "trek",
            },
            presets = {
                location = "italy",
                size = 10,
                travelers = False,
            },
            version = 1,
        },
    },
}
I want to collect the blocks of data, meaning the strings between each matching pair of {}, while maintaining the hierarchy. This data is not in a typical JSON format, so that is not a possible solution.
My idea was to create a class like so:
class Block:
    def __init__(self, header, children):
        self.header = header
        self.children = children
where I would then loop through the data line by line, 'somehow' collecting the necessary data, so my resulting output would look something like this:
Block("data = {}", [
Block("friends = {max = 0 0,\n min = 0 0,}", []),
Block("family = {version = 1}", [...])
])
In short, I'm looking for help on ways I can serialize this into useful data that I can then easily manipulate. My approach is to break it into objects by using the {} as dividers.
If anyone has suggestions on ways to better approach this, I'm open to ideas. Thank you.
So far I've just implemented these basic snippets of code:
class Block:
    def __init__(self, content, children):
        self.content = content
        self.children = children

def GetBlock(strArr=[]):
    print len(strArr)
    # blocks = []
    blockStart = "{"
    blockEnd = "}"

with open(filepath, 'r') as file:
    data = file.readlines()

blocks = GetBlock(strArr=data)
You can create a to_block function that takes the lines from your file as an iterator and recursively creates a nested dictionary from those. (Of course you could also use a custom Block class, but I don't really see the benefit in doing so.)
def to_block(lines):
    block = {}
    for line in lines:
        if line.strip().endswith(("}", "},")):
            break
        key, value = map(str.strip, line.split(" = "))
        if value.endswith("{"):
            value = to_block(lines)
        block[key] = value
    return block
When calling it, you have to skip the first line, though. Also, evaluating the "leaves" to e.g. numbers or strings is left as an exercise to the reader (see the sketch after the file-reading example below).
>>> to_block(iter(data.splitlines()[1:]))
{'data': {'family': {'version': '1,',
'cars': {'bike': '"trek",', 'car': '"ford",', 'van': '"honda",'},
'presets': {'travelers': 'False,', 'size': '10,', 'location': '"italy",'}},
'friends': {'max': '0 0,', 'min': '0 0,'}}}
Or when reading from a file:
with open("data.txt") as f:
next(f) # skip first line
res = to_block(f)
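One possible coercion pass for those leaves (my sketch, using ast.literal_eval) strips the trailing comma and falls back to the raw string when evaluation fails:
import ast

def coerce(value):
    # recursively walk the nested dict and evaluate the leaf strings
    if isinstance(value, dict):
        return {k: coerce(v) for k, v in value.items()}
    try:
        return ast.literal_eval(value.rstrip(','))  # numbers, strings, booleans
    except (ValueError, SyntaxError):
        return value.rstrip(',')  # e.g. '0 0' stays a plain string

Calling coerce(res) would then give e.g. 10 instead of '10,' and False instead of 'False,'.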
Alternatively, you can do some preprocessing to transform that string into a JSON(-ish) string and then use json.loads. However, I would not go all the way here but instead just wrap the values in "" (and replace the original " with ' before that), otherwise there is too much risk of accidentally turning a string with spaces into a list or similar. You can sort those out once you've created the JSON data.
>>> data = data.replace('"', "'")
>>> data = re.sub(r'= (.+),$', r'= "\1",', data, flags=re.M)
>>> data = re.sub(r'^\s*(\w+) = ', r'"\1": ', data, flags=re.M)
>>> data = re.sub(r',$\s*}', r'}', data, flags=re.M)
>>> json.loads(data)
{'data': {'family': {'version': '1',
'presets': {'size': '10', 'travelers': 'False', 'location': "'italy'"},
'cars': {'bike': "'trek'", 'van': "'honda'", 'car': "'ford'"}},
'friends': {'max': '0 0', 'min': '0 0'}}}
You can also do this with ast or json with the help of regex substitutions.
import re
a = """{
    data = {
        friends = {
            max = 0 0,
            min = 0 0,
        },
        family = {
            cars = {
                van = "honda",
                car = "ford",
                bike = "trek",
            },
            presets = {
                location = "italy",
                size = 10,
                travelers = False,
            },
            version = 1,
        },
    },
}"""
#with ast
a = re.sub("(\w+)\s*=\s*", '"\\1":', a)
a = re.sub(":\s*((?:\d+)(?: \d+)+)", lambda x:':[' + x.group(1).replace(" ", ",") + "]", a)
import ast
print ast.literal_eval(a)
#{'data': {'friends': {'max': [0, 0], 'min': [0, 0]}, 'family': {'cars': {'car': 'ford', 'bike': 'trek', 'van': 'honda'}, 'presets': {'travelers': False, 'location': 'italy', 'size': 10}, 'version': 1}}}
#with json
import json
a = re.sub(",(\s*\})", "\\1", a)
a = a.replace(":True", ":true").replace(":False", ":false").replace(":None", ":null")
print json.loads(a)
#{u'data': {u'friends': {u'max': [0, 0], u'min': [0, 0]}, u'family': {u'cars': {u'car': u'ford', u'bike': u'trek', u'van': u'honda'}, u'presets': {u'travelers': False, u'location': u'italy', u'size': 10}, u'version': 1}}}

Getting error with simple select statement on web2py's 'image blog'

For my project, I need to connect to multiple databases and get information from them. I didn't think this would be a problem with web2py, but it was. I thought maybe I need to rebuild the db from scratch, but still had problems. Finally, I went through the introductory 'images' tutorial and changed it to use an alternate mysql database. I still got the same errors, below is the code:
db.py
db = DAL("mysql://root:#localhost/web2py")
images_db = DAL("mysql://root:#localhost/images_test")
images_db.define_table('image',
Field('title', unique=True),
Field('file', 'upload'),
format = '%(title)s')
images_db.define_table('comment',
Field('image_id', images_db.image),
Field('author'),
Field('email'),
Field('body', 'text'))
Then I went to the admin page for 'images', clicked the 'shell' link under 'controllers', and did the following (after first visiting the index page to generate the images):
Shell:
In [1] : print db(images_db.image).select()
Traceback (most recent call last):
  File "/home/cody/Downloads/web2py/gluon/contrib/shell.py", line 233, in run
    exec compiled in statement_module.__dict__
  File "<string>", line 1, in <module>
  File "/home/cody/Downloads/web2py/gluon/dal.py", line 7577, in select
    fields = adapter.expand_all(fields, adapter.tables(self.query))
  File "/home/cody/Downloads/web2py/gluon/dal.py", line 1172, in expand_all
    for field in self.db[table]:
  File "/home/cody/Downloads/web2py/gluon/dal.py", line 6337, in __getitem__
    return dict.__getitem__(self, str(key))
KeyError: 'image'
In [2] : print images_db.has_key('image')
True
In [3] : print images_db
<DAL {'_migrate_enabled': True, '_lastsql': "SET sql_mode='NO_BACKSLASH_ESCAPES';", '_db_codec': 'UTF-8', '_timings': [('SET FOREIGN_KEY_CHECKS=1;', 0.00017380714416503906), ("SET sql_mode='NO_BACKSLASH_ESCAPES';", 0.00016808509826660156)], '_fake_migrate': False, '_dbname': 'mysql', '_request_tenant': 'request_tenant', '_adapter': <gluon.dal.MySQLAdapter object at 0x2b84750>, '_tables': ['image', 'comment'], '_pending_references': {}, '_fake_migrate_all': False, 'check_reserved': None, '_uri': 'mysql://root:#localhost/images_test', 'comment': <Table {'body': <gluon.dal.Field object at 0x2b844d0>, 'ALL': <gluon.dal.SQLALL object at 0x2b84090>, '_fields': ['id', 'image_id', 'author', 'email', 'body'], '_sequence_name': 'comment_sequence', '_plural': 'Comments', 'author': <gluon.dal.Field object at 0x2b84e10>, '_referenced_by': [], '_format': None, '_db': <DAL {...}>, '_dbt': 'applications/images/databases/e1e448013737cddc822e303fe20f8bec_comment.table', 'email': <gluon.dal.Field object at 0x2b84490>, '_trigger_name': 'comment_sequence', 'image_id': <gluon.dal.Field object at 0x2b84050>, '_actual': True, '_singular': 'Comment', '_tablename': 'comment', '_common_filter': None, 'virtualfields': [], '_id': <gluon.dal.Field object at 0x2b84110>, 'id': <gluon.dal.Field object at 0x2b84110>, '_loggername': 'applications/images/databases/sql.log'}>, 'image': <Table {'ALL': <gluon.dal.SQLALL object at 0x2b84850>, '_fields': ['id', 'title', 'file'], '_sequence_name': 'image_sequence', 'file': <gluon.dal.Field object at 0x2b847d0>, '_plural': 'Images', 'title': <gluon.dal.Field object at 0x2b84610>, '_referenced_by': [('comment', 'image_id')], '_format': '%(title)s', '_db': <DAL {...}>, '_dbt': 'applications/images/databases/e1e448013737cddc822e303fe20f8bec_image.table', '_trigger_name': 'image_sequence', '_loggername': 'applications/images/databases/sql.log', '_actual': True, '_tablename': 'image', '_common_filter': None, 'virtualfields': [], '_id': <gluon.dal.Field object at 0x2b848d0>, 'id': <gluon.dal.Field object at 0x2b848d0>, '_singular': 'Image'}>, '_referee_name': '%(table)s', '_migrate': True, '_pool_size': 0, '_common_fields': [], '_uri_hash': 'e1e448013737cddc822e303fe20f8bec'}>
Now I don't quite understand why I am getting errors here; everything appears to be in order. I thought web2py supported multiple databases? Am I doing it wrong? The appadmin works fine; perhaps I'll edit it and get it to raise an error with the code it's generating... any help would be appreciated.
Cody
UPDATE:
I just tried this:
MODELS/DB.PY
db = DAL("mysql://root:#localhost/web2py")
images_db = DAL("mysql://root:#localhost/images_test")
images_db.define_table('image',
Field('title', unique=True),
Field('file', 'upload'),
format = '%(title)s')
images_db.define_table('comment',
Field('image_id', images_db.image),
Field('author'),
Field('email'),
Field('body', 'text'))
CONTROLLERS/DEFAULT.PY
def index():
    """
    example action using the internationalization operator T and flash
    rendered by views/default/index.html or views/generic.html
    """
    if images_db.has_key('image'):
        rows = db(images_db.image).select()
    else:
        rows = 'nope'
    #rows = dir(images_db)
    return dict(rows=rows)
VIEWS/DEFAULT/INDEX.HTML
{{left_sidebar_enabled,right_sidebar_enabled=False,True}}
{{extend 'layout.html'}}
these are the rows:
{{=rows }}
Again, very confused by all of this. Appreciate any help.
If you want to query the images_db connection, you have to call images_db(), not db(). So, it would be:
images_db(images_db.image).select()
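Applied to the controller from the question, the index action would then be (a sketch; only the select line changes):
def index():
    if images_db.has_key('image'):
        rows = images_db(images_db.image).select()
    else:
        rows = 'nope'
    return dict(rows=rows)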
