Where does 'default' come from? - python

I've just started using TinyDB to store my data in a JSON file, which makes it easy for me to search for any content in the file. I copied some code from https://pypi.python.org/pypi/tinydb and changed the names to fit the project I am doing. However, I do not understand where this 'default' and '1' come from.
Also, the examples I've found for creating a table are all typed at the interactive prompt, and none are written as Python 3 scripts, so does anyone know which websites provide help on creating tables with TinyDB in Python 3? I've searched everywhere.
Can someone enlighten me, please?
from tinydb import TinyDB, Query
db = TinyDB('/home/pi/Desktop/csv/smartkey1.json')
table = db.table('pillar')
table.insert({'active': True})
table.all()
[{'active': True}]
Output:
{"_default": {}, "pillar": {"1": {"active": true}}}

The _default key shows the contents of the default table. In your case it is empty: {}.
In the pillar table, the number 1 is the record's unique identifier, its Element ID; TinyDB assigns these automatically, starting at 1.
Not sure if I understood your last question correctly, but instead of typing lines at the interactive prompt, save them in a file with a .py extension and run it with python filename.py from your command line.
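You can also see where both names come from by inspecting the file itself with the standard json module; a small sketch using the exact output string from above:

```python
import json

# The raw file contents shown above: an empty "_default" table plus the
# "pillar" table, whose keys are the auto-assigned Element IDs.
raw = '{"_default": {}, "pillar": {"1": {"active": true}}}'
data = json.loads(raw)

print(list(data.keys()))    # ['_default', 'pillar']
print(data['pillar']['1'])  # {'active': True}
```

The top-level keys are the table names, and inside each table the keys ("1", "2", ...) are the Element IDs.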

Related

How to create a code for checking if entries are new, updated or deleted in a csv file using Python script?

I have a csv file and I want to add another column where a script can mark whether each entry is new, updated, or deleted. The file contains words, so it will be a string comparison, and I want the comparison to be case-sensitive.
I am new to coding and need help writing the code to do this.
Thank you!
I tried some code just to test it out, but I am getting errors.
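In case it helps, a minimal sketch of one way to approach this (the sample rows and the choice of the first column as the key are assumptions, not from the original): load two snapshots of the csv as lists of rows, key each row on its first column, and compare the rest. Python's == on strings is already case-sensitive.

```python
# Hypothetical sketch: rows are lists of strings, keyed on the first
# column. String comparison with == is case-sensitive by default.
def diff_rows(old_rows, new_rows):
    old = {row[0]: row for row in old_rows}
    new = {row[0]: row for row in new_rows}
    status = {}
    for key, row in new.items():
        if key not in old:
            status[key] = 'new'
        elif row != old[key]:
            status[key] = 'updated'
    for key in old:
        if key not in new:
            status[key] = 'deleted'
    return status

old_rows = [['apple', 'red'], ['pear', 'green']]
new_rows = [['apple', 'Red'], ['plum', 'purple']]
print(diff_rows(old_rows, new_rows))
# {'apple': 'updated', 'plum': 'new', 'pear': 'deleted'}
```

Note that 'apple' is reported as updated precisely because 'red' and 'Red' differ in case. The rows themselves would come from csv.reader in a real script.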

Save KDB+/q table on a Mac

I'm new to q and I'm trying to save a file on my Mac. Currently using Jupyter Notebook if that makes a difference.
A quick table:
t:([] c1:`a`b`c; c2:1.1 2.2 3.3)
I first check my current location with \cd and get: "/Users/Gorlomi/Documents/q"
but when I try
`:/Users/Gorlomi/Documents/q set t
I get:
evaluation error:
type
[1] (.q.set)
[0] `:/Users/Gorlomi/Documents/q set t
^
I'm following examples from "Q for Mortals" from the kx website:
https://code.kx.com/q4m3/1_Q_Shock_and_Awe/#11-starting-q
To find the example quickly, use Cmd (or Ctrl) + F and search for "t set t".
Thank you in advance.
There are two answers to this question, depending on whether you want to save your table as a flat table or as a splayed table.
If you want to save your table as a flat table, you need to give set a file name for your table. Currently you're just giving it the directory you want to save it in, so the following should work for you:
`:/Users/Gorlomi/Documents/q/t set t
If instead you want to save your table as a splayed table, you need to pass set a directory (ideally one not already used by the file system), that is, a file path with a trailing forward slash. So the following should work for you:
`:/Users/Gorlomi/Documents/q/t/ set t

PostgreSQL COPY of a CSV file breaking before completion. Works with psql \copy but not COPY

I'm working with what I think is a pretty large file that I need to ingest into a PostgreSQL database. Here are a few example rows of the .csv that the COPY command reads to insert into the database. In each row, sample_id and otu_id are foreign keys that refer to primary keys in a sample table and an otu table.
sample_id,otu_id,count
163,2901,0.0
164,2901,0.0
165,2901,0.0
Which is ingested into the table with the following code using SQLAlchemy:
self._engine.execute(
    text('''COPY otu.sample_otu from :csv CSV header''').execution_options(autocommit=True),
    csv=fname)
After copying to the table from the .csv file, I query the database for the samples shown and it gives me a result for the sample_id=163, otu_id=2901 but it doesn't give me the rows after that. If I'm correct, the COPY command stops copying after the first error it encounters so my guess is that there's a problem with the sample of id 164 and otu id of 2901.
I've tried the following:
There is a valid entry in the otu table for 2901, and likewise for id 164 in the sample table, so I don't think it's a missing-key error.
I have also searched the file for duplicate foreign-key combinations and can't find any.
I've tried writing only every second entry into the .csv file, in case the problem was the file's size, but I got the same issue, just cut off at a different point: when copying only entries with even otu_ids, the subsequent query for otu_ids > 2890 breaks at sample id 152, otu id 2900.
I tried using psql's \copy command to manually copy from the .csv file:
\copy sample_otu FROM 'bpaotu-ijpgihw6' WITH DELIMITER ',' CSV HEADER;
This seems to work perfectly fine. The query shows otu_ids past the otu id 2901.
I'm just very confused as to why it breaks there, since the .csv rows before and after look identical and the corresponding primary-key values exist in the foreign tables it references.
For anyone who comes across this:
The problem was a simple scoping error. I missed an indent in the file-writing with block of my Python import script, so it attempted the COPY while the .csv file was still being written. The COPY statement didn't throw errors because, as far as it could tell, the (still-truncated) file was complete.
I moved the indentation so the COPY runs after the file-writing block has closed, and it now works.
This also explains why psql's \copy command worked while the scripted COPY didn't: the manual copy ran after the file had been fully written.
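The shape of the bug can be sketched in a few lines (function and file names below are made up, not from the original script): the COPY call sat inside the with block, so it ran before the csv file was flushed and closed.

```python
import csv

def run_copy(fname):
    # Placeholder standing in for the SQLAlchemy COPY call in the question.
    print('copying from', fname)

def export_and_copy(fname, rows):
    with open(fname, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['sample_id', 'otu_id', 'count'])
        writer.writerows(rows)
        # BUG: calling run_copy(fname) here would read a partially
        # written file, since Python buffers output until close().

    # FIX: de-indented one level, so the file is closed (and fully
    # flushed to disk) before COPY reads it.
    run_copy(fname)

export_and_copy('sample_otu.csv', [[163, 2901, 0.0], [164, 2901, 0.0]])
```

Exiting the with block closes the file and flushes the buffers, which is why moving the call one indent level out fixes the truncated import.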

Python JSON dictionary key error

I'm trying to collect data from a JSON file using Python. I was able to access several chunks of text, but when I get to the third object in the JSON file I get a key error. The first three lines below work fine, but the last line gives me a KeyError.
response = urllib.urlopen("http://asn.desire2learn.com/resources/D2740436.json")
data = json.loads(response.read())
title = data["http://asn.desire2learn.com/resources/D2740436"]["http://purl.org/dc/elements/1.1/title"][0]["value"]
description = data["http://asn.desire2learn.com/resources/D2740436"]["http://purl.org/dc/terms/description"][0]["value"]
topics = data["http://asn.desire2learn.com/resources/D2740436"]["http://purl.org/gem/qualifiers/hasChild"]
topicDesc = data["http://asn.desire2learn.com/resources/S2743916"]
Here is the JSON file I'm using. http://s3.amazonaws.com/asnstaticd2l/data/rdf/D2742493.json I went through all the braces and can't figure out why I'm getting this error. Anyone know why I might be getting this?
topics = data["http://asn.desire2learn.com/resources/D2740436"]["http://purl.org/gem/qualifiers/hasChild"]
I don't see the key "http://asn.desire2learn.com/resources/D2740436" anywhere in your source file. You didn't include your stack trace, but my first thought is a typo resulting in a bad key, giving you an error like:
KeyError: "http://asn.desire2learn.com/resources/D2740436"
which means that key does not exist in the data you are referencing.
The link in your code and your AWS link go to very different files. Open up the link in your code in a web browser, and you will find that it's much shorter than the file on AWS. It doesn't actually contain the key you're looking for.
You say that you are using the linked file, in which the key "http://asn.desire2learn.com/resources/S2743916" turns up once.
However, your code is downloading a different file - one in which the key does not appear.
Try using the file you linked, and the key lookup should work.
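When a key may or may not be present, you can also guard the lookup instead of letting the KeyError propagate; a small sketch (the JSON string below is a made-up stand-in for the downloaded file, holding only the first key):

```python
import json

# Stand-in for the downloaded document; only the first key exists here.
data = json.loads(
    '{"http://asn.desire2learn.com/resources/D2740436": {"title": "x"}}')

present = "http://asn.desire2learn.com/resources/D2740436"
missing = "http://asn.desire2learn.com/resources/S2743916"

print(present in data)    # membership test instead of indexing -> True
print(data.get(missing))  # .get returns None instead of raising -> None
```

This makes it easy to print which key is absent and confirm whether the file you downloaded is really the one you expected.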

Open URL stored in a csv file

I'm almost an absolute beginner in Python, but I've been asked to handle a rather difficult task. I have read many tutorials and found some very useful tips on this website, but I don't think this question has been asked before, at least not in the way I phrased it in the search engine.
I have managed to write some URLs to a csv file. Now I would like to write a script that opens this file, opens the URLs, and writes their content into a dictionary. But I have failed: my script can print these addresses, but cannot process the file.
Interestingly, my script did not produce the same error message each time. Here is the latest: req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'
So I think my script has several problems:
1 - is my method of opening the URLs the right one?
2 - what is wrong with the way I build the dictionary?
Here is my attempt below. Thanks in advance to those who would help me !
import csv
import urllib

dict = {}
test = csv.reader(open("read.csv","rb"))
for z in test:
    sock = urllib.urlopen(z)
    source = sock.read()
    dict[z] = source
    sock.close()
print dict
First thing, don't shadow built-ins. Rename your dictionary to something else as dict is used to create new dictionaries.
Secondly, csv.reader yields a list per line containing all the columns. Either reference the column explicitly with urllib.urlopen(z[0]) # first column in the line, or open the file with a normal open() and iterate through it.
Apart from that, it works for me.
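Putting both points together, here is a Python 3 sketch of the corrected script (the original was Python 2). To keep it self-contained it writes its own read.csv pointing at a local file:// URL; in real use the csv would already hold the actual web addresses, one per row:

```python
import csv
from pathlib import Path
from urllib.request import urlopen

# Self-contained setup: a tiny local page and a read.csv referencing it
# via a file:// URL (stand-ins for the real file and web addresses).
target = Path("page.txt").resolve()
target.write_text("hello")
Path("read.csv").write_text(target.as_uri() + "\n")

pages = {}  # renamed from "dict" to avoid shadowing the built-in
with open("read.csv", newline="") as f:
    for row in csv.reader(f):
        url = row[0]  # each row is a list; the URL is in column 0
        with urlopen(url) as sock:
            pages[url] = sock.read()
print(pages)
```

The two fixes from above are the renamed dictionary and the row[0] indexing; the with blocks also ensure the file and sockets get closed.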

Categories

Resources