How to get data that has a specific child key using Pyrebase - python

I'm using Pyrebase to access my Firebase database. My database is currently structured like so:
- users
    - 12345
        name: "Kevin"
        company: "Nike"
Where 12345 is the user's id, and the company is the company that the user belongs to. I'm currently trying to get all the users that belong to Nike. According to the Pyrebase docs, doing something like this should work:
db.child("users").order_by_child("company").equal_to("Nike").get().val()
but I'm getting the error "error" : "orderBy must be a valid JSON encoded path". Does anyone know why this might be the case?

There is something wrong with the Pyrebase library. Here's a link to the problem.
The solution is to add these lines of code in your app.
# Temporarily replace quote function
def noquote(s):
    return s

pyrebase.pyrebase.quote = noquote
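For context, here's a minimal sketch of how the workaround slots into an app before running the query from the question (the config values are placeholders, not real credentials):

import pyrebase

# Workaround: make Pyrebase's quote() a no-op so the query parameters
# reach Firebase in the form its REST API expects.
def noquote(s):
    return s

pyrebase.pyrebase.quote = noquote

# Placeholder config -- substitute your own project's values.
config = {
    "apiKey": "...",
    "authDomain": "your-project.firebaseapp.com",
    "databaseURL": "https://your-project.firebaseio.com",
    "storageBucket": "your-project.appspot.com",
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

# With the patch applied, the original query should return the Nike users.
nike_users = db.child("users").order_by_child("company").equal_to("Nike").get().val()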

I managed to fix this problem, since I'm also using the REST API to connect to my Firebase Realtime Database. I'll demonstrate where the error lies with examples:
When I don't wrap the orderBy value (child, key, etc.) and the other query parameters in double quotes, Retrofit (which I'm using) gives me an error/bad request.
Here's the error/bad request URL:
https://yourfirebaseprojecturl.com/Users.json?orderBy=username&startAt=lifeofkevin
See, both the orderBy value and the startAt value, in this case username and lifeofkevin, are not wrapped in double quotes (i.e. "username" and "lifeofkevin"), so Firebase returns orderBy must be a valid JSON encoded path.
To make it work, I need to wrap orderBy and the other query parameters in double quotes, so that Firebase returns the data you want to work with.
Here's the second example, the correct one:
https://yourfirebaseprojecturl.com/Users.json?orderBy="username"&startAt="gang"
Now notice the difference? Both the orderBy and startAt values are wrapped in double quotes, so now they return the data you want to work with.
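To illustrate outside Retrofit, here's a minimal Python sketch of the same REST call using the requests library (the URL is the same placeholder as above; a public or token-authenticated database is assumed). Note that the parameter values are JSON-encoded, i.e. wrapped in double quotes, before being sent:

import requests

BASE_URL = "https://yourfirebaseprojecturl.com"  # placeholder project URL

params = {
    "orderBy": '"username"',   # JSON-encoded path, double quotes included
    "startAt": '"gang"',
}

# requests percent-encodes the quotes (%22), which Firebase accepts.
resp = requests.get(f"{BASE_URL}/Users.json", params=params)
print(resp.json())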

Related

Problem with updating NULL values in Salesforce using Python

I am trying to upsert user data to Salesforce using a Python PATCH request. I have a dataset in the form of a DataFrame that contains several null values. While trying to upsert the data to Salesforce, it throws this error:
{'message': 'Deserializing the instance of date from VALUE_STRING, value nan, cannot be executed or the request is missing a required field at [line:1, column:27]', 'errorCode': 'JSON_PARSER_ERROR'}
To resolve the error I have tried to replace the values with None and also with "null", as shown in the code below. Still, I receive the same error.
df_1.fillna(value=None, method=None, inplace=True)
df_1 = df_1.replace(np.NaN, "null")
The error then is:
{'message': 'Deserializing the instance of date from VALUE_STRING, value null, cannot be executed or the request is missing a required field at [line:1, column:27]', 'errorCode': 'JSON_PARSER_ERROR'}
Any possible leads would be immensely helpful.
You'll need to find a way to inspect the final JSON generated just before it's sent out.
You can use Workbench to experiment (there's "Utilities -> REST Explorer" after you log in); over the normal REST API the update is a straightforward operation:
PATCH {your instance url}/services/data/v55.0/sobjects/Account/0017000000Lg8Wh
(put your account id) with body (either form works)
{
    "BillingCity": null,
    "BillingCountry": ""
}
should clear the fields. A "null" won't work, it counts as a string.
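For reference, a minimal sketch of the same PATCH from Python with the requests library (the instance URL, token, and record id are placeholders). Passing Python None through json= serializes to a JSON null, which is what clears the field:

import requests

instance_url = "https://yourInstance.my.salesforce.com"  # placeholder
access_token = "YOUR_ACCESS_TOKEN"                       # placeholder

resp = requests.patch(
    f"{instance_url}/services/data/v55.0/sobjects/Account/0017000000Lg8Wh",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    # None becomes null in the JSON body; "" clears a text field.
    json={"BillingCity": None, "BillingCountry": ""},
)
print(resp.status_code)  # 204 No Content on success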
====
If you're using "Bulk API 2.0" (https://trailhead.salesforce.com/content/learn/modules/api_basics/api_basics_bulk; I think you'd notice it's a different, asynchronous dance of initialise job, upload data, start processing, periodically check "is it done yet"...), then for the JSON format null should work too; for XML you need a special tag, and if your format is CSV it's supposed to be #N/A.
Try
df.replace({np.nan: None}, inplace=True)
This is the equivalent of submitting a null or empty string value to Salesforce.
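As a quick check of why this works, a minimal sketch (the column names are made up): once NaN is replaced with None, the standard json module serializes it as null rather than the strings "nan" or "null" that triggered the parser error:

import json

import numpy as np
import pandas as pd

# Hypothetical record with a missing date field.
df = pd.DataFrame([{"FirstName": "Kim", "Birthdate": np.nan}])

# NaN -> None so serialization produces a real JSON null.
records = df.replace({np.nan: None}).to_dict(orient="records")

print(json.dumps(records[0]))  # {"FirstName": "Kim", "Birthdate": null}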

Django Database querying

I'm having an issue here that I'd like clarification on. I'm making a program that does analysis of data.
I want to query data from different users (the data is numerical, by the way): whenever I get subject marks from a user, I want the system to return the name of the user who has those marks.
It was all working fine until I tried querying users with the same marks, and all I got was an error:
analyzer.models.ResultsALevel.MultipleObjectsReturned: get() returned more than one ResultsALevel -- it returned 4!
So I'm trying to find a way to still query and return the names of the users with that subject mark, even if they have the same marks.
I believe that should be possible since the users have different IDs and such. Help would be much appreciated!
Here's my views.py
biology_marks = []
for student in ResultsALevel.objects.all():
    biology_marks.append(student.advanced_biology_1)

value_1_biology = ResultsALevel.objects.get(advanced_biology_1=biology_marks[-1]).student_name.last_name
value_2_biology = ResultsALevel.objects.get(advanced_biology_1=biology_marks[-2]).student_name.last_name
value_3_biology = ResultsALevel.objects.get(advanced_biology_1=biology_marks[-3]).student_name.last_name
value_4_biology = ResultsALevel.objects.get(advanced_biology_1=biology_marks[-4]).student_name.last_name
The manager's get() method is used to retrieve a single instance, usually selected by a primary-key field.
Using get() on a field where multiple rows match the query raises an error (MultipleObjectsReturned).
Use filter() instead, which returns a QuerySet of all matching objects, as shown below.
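Applied to the code from the question, a minimal sketch (field and relation names taken from the question): filter() returns every matching row, so students who share a mark are all returned instead of raising MultipleObjectsReturned:

# All students whose biology mark equals the last collected mark.
matches = ResultsALevel.objects.filter(advanced_biology_1=biology_marks[-1])

# One last name per matching student, duplicates included.
last_names = [result.student_name.last_name for result in matches]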

How to search for values in TinyDB

I would like to search my database for a value. I know you can do this for a key with db.search(), but there doesn't seem to be any similar function for searching for a value.
I've tried using the contains() function, but I have the same issue: it checks whether the key is contained in the database. I would like to know if a certain value is contained in the database.
I would like to do something like this that would search for values in TinyDB:
db.search('value')
If I could execute the above command and get the value back (if it exists), or nothing if it doesn't, that would be ideal. Alternatively, if it returned True or False accordingly, that would be fine as well.
I don't know if this is what you are looking for, but with the following command you can check for a specific field value:
from tinydb import Query
User = Query()
db.search(User.field_name == 'value')
I'm new here (doing some reading to see if TinyDB would even be applicable for my use case), so I may be wrong, and I'm aware that this question is a little old. But I wonder if you couldn't address this by iterating over each field and searching within it for your value. Then you could get the key or field where a value match was located; see the sketch below.
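Along those lines, a minimal sketch of that idea (the database path is a placeholder): iterate over every document and check its values, returning True or False as the question suggests:

from tinydb import TinyDB

db = TinyDB("db.json")  # placeholder path to your TinyDB file

def value_exists(db, value):
    # True if any field of any document holds the given value.
    return any(value in doc.values() for doc in db.all())

print(value_exists(db, "value"))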

CQL update query not working using Python

I am trying to update the new password after a reset in my Cassandra DB. This is the query I have written, where both the username and password fields are dynamic. Is this right?
def update_db(uname, pwd):
    query = session.prepare('update user.userdetails set "password"=%s where "username" = ? ALLOW FILTERING', pwd)
    session.execute(query, (uname,))

update_db(username, new_pwd)
I am calling this through an API. But it doesn't seem to update.
Alex is absolutely correct in that you need to provide the complete PRIMARY KEY for any write operation. Remove ALLOW FILTERING and your query should work as long as your primary key definition is: PRIMARY KEY (username).
Additionally, it's best practice to parameterize your entire prepared statement, instead of relying on string formatting for password.
query = session.prepare('update user.userdetails set "password"=? where "username"=?')
session.execute(query, [pwd, uname])
Note: If at any point you find yourself needing the ALLOW FILTERING directive, you're doing it wrong.
For updating a record you need to provide the primary key(s) completely. It will not work with ALLOW FILTERING; you first need to get all the primary keys that you want to update, and then issue individual update commands (see the sketch below, and the documentation for a more detailed description of the UPDATE command).
If you really want to specify a default value for some column, why not simply handle it with something like .get('column', 'default-value')?
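Putting both answers together, a minimal sketch of the corrected update (assuming the DataStax cassandra-driver, a local node, and PRIMARY KEY (username); the connection details are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # placeholder contact point
session = cluster.connect()

# Prepare once; bind both values so the driver handles quoting/escaping.
prepared = session.prepare(
    'UPDATE user.userdetails SET "password" = ? WHERE "username" = ?'
)

def update_db(uname, pwd):
    session.execute(prepared, [pwd, uname])

update_db("kevin", "new_pwd")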

python SPARQL query RESULTS BINDINGS: IF statement on BINDINGs value?

Using Python with SPARQLWrapper, JSON, urllib2 & cgi. I had trouble passing a working SPARQL query with some NULL values to Python, so I populated the blanks with a literal and will try to filter at the output. I have this results-section example:
for result in results["results"]["bindings"]:
    project = result["project"]["value"].encode('utf-8')
    filename = result["filename"]["value"].encode('utf-8')
    url = result["url"]["value"].encode('utf-8')
...and I print them with %s. Is there a way to filter a value, i.e., IF VALUE NE "string" THEN PRINT? Or is there another workaround? I'm at the tail end of a small project; I know I need a better wrapper, I just need to get these results filtered before I can move on. T very much IA...
I'm one of the developers of the SPARQLWrapper library, and this question has already been answered on the mailing list.
Regarding optional values in the original query, the result set will come with no values for those variables. The problem is that we'd need to parse the query to populate such missing entries, and we want to avoid such parsing; therefore you need to check for them yourself to avoid runtime problems with KeyError.
Usually I use a code like:
for result in results["results"]["bindings"]:
    party = result["party"]["value"] if ("party" in result) else None
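Building on that pattern with the variables from the question, a minimal sketch that also filters out the placeholder literal used to fill the blanks (the literal's value, "N/A" here, is an assumption):

PLACEHOLDER = "N/A"  # assumed value of the literal used to fill the blanks

for result in results["results"]["bindings"]:
    project = result["project"]["value"] if "project" in result else None
    filename = result["filename"]["value"] if "filename" in result else None
    url = result["url"]["value"] if "url" in result else None

    # Only print rows whose project value is real, i.e. not the placeholder.
    if project is not None and project != PLACEHOLDER:
        print("%s %s %s" % (project, filename, url))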
