Push a raw value to Firebase via REST API - python

I am trying to use the requests library in Python to push data (a raw value) to a Firebase location.
Say I have urladd (the URL of the location, including the authentication token). At that location I want to push a string, say International. Based on the answer here, I tried:
data = {'.value': 'International'}
p = requests.post(urladd, data = sjson.dumps(data))
I get <Response [400]>, and p.text gives me:
u'{\n "error" : "Invalid data; couldn\'t parse JSON object, array, or value. Perhaps you\'re using invalid characters in your key names."\n}\n'
It appears that the key .value is invalid, but that is what the answer linked above suggests. Any idea why this may not be working, or how I can do this through Python? There are no problems with connection or authentication, because the following works; however, it pushes an object instead of a raw value.
data = {'name': 'International'}
p = requests.post(urladd, data = sjson.dumps(data))
Thanks for your help.

The answer you've linked is a special case for when you want to assign a priority to a value. In general, '.value' is an invalid name and will throw an error.
If you want to write just "International", you should write the stringified-JSON version of that data. I don't have a Python example in front of me, but the curl command would be:
curl -X POST -d "\"International\"" https://...

Andrew's answer above works. In case someone else wants to know how to do this using the requests library in Python, I thought this would be helpful.
import requests
import simplejson as sjson

data = sjson.dumps("International")
p = requests.post(urladd, data=data)
For some reason I had thought that the data had to be in dictionary format before being converted to the stringified-JSON version. That is not the case: a plain string can be used as input to sjson.dumps().
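As a side note, newer versions of the requests library can do the JSON serialization for you via the json= keyword, so the explicit dumps() call becomes optional. A minimal sketch, reusing the urladd placeholder from the question:
import requests

# requests serializes the Python string to the JSON value "International"
# and sets the Content-Type header to application/json automatically.
p = requests.post(urladd, json="International")
print(p.status_code, p.text)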

Related

JSON parsing with Python from a RethinkDB database [Python]

I'm trying to retrieve data from a RethinkDB database. It returns JSON when called with r.db("Databasename").table("tablename").insert([{ "id or primary key": line}]).run(); when I read a row back, the output is [{'id': 'ValueInRowOfid\n'}], and I want to parse that down to just the value, e.g. "ValueInRowOfid". I've tried the json module in Python, but I always end up with TypeError: list indices must be integers or slices, not str, and I've been told that this is because the database outputs an invalid JSON format. My question is how a JSON format can be invalid (I can't see what is invalid in the output), and also what the best way is to parse it so that the value "ValueInRowOfid" ends up in a variable, e.g. value = "ValueInRowOfid".
This part imports the modules used and connects to RethinkDB:
import json
from rethinkdb import RethinkDB
r = RethinkDB()
r.connect("localhost", 28015).repl()
This part gets the value and shows my attempt at parsing it:
getvalue = r.db("Databasename").table("tablename").sample(1).run()  # gets a single row/value from the table
print(getvalue)  # printing this shows [{'id': 'ValueInRowOfid\n'}]
dumper = json.dumps(getvalue)  # I can't call json.loads() on getvalue directly, since it must be a str and the database output is a list
parsevalue = json.loads(dumper)  # after json.dumps(getvalue) I can now load it, but I can't use the loaded JSON
print(parsevalue["id"])  # this says list indices must be integers or slices, not str; frustrating, since it first wants a str and now it can't use one
print(parsevalue{'id'})  # I also tried shuffling the brackets around, as seen here, but got the same result
I know this is janky, and the level of confusion I am at may be hard to comprehend; I don't know whether this is the simplest of problems or something that just isn't possible (which it should be, or else I can't use the data in my database).
Thank you for reading this through and not jumping straight into the comments to say that I have to read the JSON documentation, because I have, and I haven't found a single piece of it that could help me.
I tried reading the documentation and watching tutorials about JSON and JSON parsing. I also looked for others who have had the same problem as me and couldn't find any.
It looks like it's returning a dictionary ({}) inside a list ([]) of one element.
Try:
getvalue = r.db("Databasename").table("tablename").sample(1).run()
print(getvalue[0]['id'])
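If the trailing newline in the stored value gets in the way, here is a minimal sketch building on that answer (the strip() call is just an assumption about how you want to clean the value):
getvalue = r.db("Databasename").table("tablename").sample(1).run()  # a list containing one dict
value = getvalue[0]['id'].strip()  # index the list, then the dict, then drop the trailing '\n'
print(value)  # ValueInRowOfid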

Google Business Profile API readMask

After the deprecation of my discovery URL, I had to make some changes to my code, and now I get this error.
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://mybusinessbusinessinformation.googleapis.com/v1/accounts/{*accountid*}/locations?filter=locationKey.placeId%3{*placeid*}&readMask=paths%3A+%22locations%28name%29%22%0A&alt=json returned "Request contains an invalid argument.". Details: "[{'#type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'read_mask', 'description': 'Invalid field mask provided'}]}]">
I am trying to use the accounts.locations.list endpoint.
I'm using:
python 3.8
google-api-python-client 2.29.0
My current code looks like:
from google.protobuf.field_mask_pb2 import FieldMask

googleAPI = GoogleAPI.auth_with_credentials(
    client_id=config.GMB_CLIENT_ID,
    client_secret=config.GMB_CLIENT_SECRET,
    client_refresh_token=config.GMB_REFRESH_TOKEN,
    api_name='mybusinessbusinessinformation',
    api_version='v1',
    discovery_service_url="https://mybusinessbusinessinformation.googleapis.com/$discovery/rest")

field_mask = FieldMask(paths=["locations(name)"])

outputLocation = googleAPI.service.accounts().locations().list(
    parent="accounts/{*id*}",
    filter="locationKey.placeId=" + google_place_id,
    readMask=field_mask
).execute()
Starting from the error, I tried a lot of field mask paths and still don't know what they want.
I've tried things like location.name, name, locations.name, locations.location.name, and it didn't work.
I also tried passing the readMask parameter as a plain string, without using the FieldMask class, and had the same problem.
So if someone knows what format of readMask they want, that would be great!
These can help:
https://www.youtube.com/watch?v=T1FUDXRB7Ns
https://developers.google.com/google-ads/api/docs/client-libs/python/field-masks
You have not set the readMask correctly. I have done a similar task in Java, and Google returns the results. readMask is a String type, and the line I provide below includes all fields; you can omit any that do not serve you. I am also writing the request code in Java; maybe it can help you convert it into Python.
String readMask = "storeCode,regularHours,name,languageCode,title,phoneNumbers,categories,storefrontAddress,websiteUri,regularHours,specialHours,serviceArea,labels,adWordsLocationExtensions,latlng,openInfo,metadata,profile,relationshipData,moreHours";
MyBusinessBusinessInformation.Accounts.Locations.List request = mybusinessaccountLocations.accounts().locations().list(accountName).setReadMask(readMask);
ListLocationsResponse response = request.execute();
List<Location> locations = response.getLocations();
while (response.getNextPageToken() != null) {
    request.setPageToken(response.getNextPageToken());
    response = request.execute();
    locations.addAll(response.getLocations());
}
-- About the question you asked in the comment, this is what I have for placeId:
As other people have said before, the readMask is not set correctly.
Here, in the Google My Business Business Information API v1 documentation, you can see:
"This is a comma-separated list of fully qualified names of fields"
and here, in the Google JSON representation, you can see the fields.
With this, I tried
read_mask = "name,title"
and it worked for me; hope this works for you too.
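Putting that together with the code from the question, here is a minimal sketch of the corrected call (googleAPI, the account id placeholder and google_place_id are the question's own names; the key point is that readMask is passed as a plain comma-separated string rather than a FieldMask object):
read_mask = "name,title"  # comma-separated list of fully qualified field names

outputLocation = googleAPI.service.accounts().locations().list(
    parent="accounts/{*id*}",
    filter="locationKey.placeId=" + google_place_id,
    readMask=read_mask
).execute()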

How to separate data in a Restful API?

I am working on a program that reads the content of a RESTful API from ImportIO. The connection works, and data is returned, but it's a jumbled mess. I'm trying to clean it to return only ASINs.
I have tried using split with a delimiter, with no success.
stuff = requests.get('https://data.import.io/extractor***')
stuff.content
I get the content, but I want to extract only the ASINs.
While .content gives you access to the raw bytes of the response payload, you will often want to convert them into a string using a character encoding such as UTF-8. The response will do that for you when you access .text:
response.text
Because the decoding of bytes to str requires an encoding scheme, requests will try to guess the encoding based on the response’s headers if you do not specify one. You can provide an explicit encoding by setting .encoding before accessing .text:
If you take a look at the response, you’ll see that it is actually serialized JSON content. To get a dictionary, you could take the str you retrieved from .text and deserialize it using json.loads(). However, a simpler way to accomplish this task is to use .json():
response.json()
The type of the return value of .json() is a dictionary, so you can access values in the object by key.
You can do a lot with status codes and message bodies. But, if you need more information, like metadata about the response itself, you’ll need to look at the response’s headers.
For More Info: https://realpython.com/python-requests/
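As an illustration of the points above, a short sketch against the extractor URL from the question (treat it as a sketch: the URL is the question's truncated placeholder, and .json() only succeeds if the body really is JSON):
import requests

stuff = requests.get('https://data.import.io/extractor***')

stuff.encoding = 'utf-8'  # optional: set an explicit encoding before touching .text
print(stuff.text)         # the payload decoded to a str
data = stuff.json()       # deserializes a JSON body into Python objects; fails if the body is not valid JSON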
What format is the returned information in? Typically RESTful APIs return the data as JSON, so you will likely have luck parsing it as a JSON object.
https://realpython.com/python-requests/#content
stuff_dictionary = stuff.json()
With that, the content is loaded as a dictionary and you will have a much easier time.
EDIT:
Since I don't have the full URL to test, I can't give an exact answer. Given the content type is CSV, using a pandas DataFrame is pretty easy. With a quick StackOverflow search, I found the following answer: https://stackoverflow.com/a/43312861/11530367
So I tried the following in the terminal and got a dataframe from it
from io import StringIO
import pandas as pd
pd.read_csv(StringIO("HI\r\ntest\r\n"))
So you should be able to perform the following
from io import StringIO
import pandas as pd
df = pd.read_csv(StringIO(stuff.text))  # use .text here: StringIO expects a str, while .content is bytes
If that doesn't work, consider dropping the three bytes at the start of your response: b'\xef\xbb\xbf' (a UTF-8 BOM). Check the answer from Mark Tolonen below for how to handle this.
After that, selecting the ASIN (your second column) from your dataframe should be easy.
asins = df.loc[:, 'ASIN']
asins_arr = asins.array
The response is the byte string of CSV content encoded in UTF-8. The first three escaped byte codes are a UTF-8-encoded BOM signature. So stuff.content.decode('utf-8-sig') should decode it. stuff.text may also work if the encoding was returned correctly in the response headers.
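Combining that with the pandas suggestion above, a minimal sketch (the 'ASIN' column name is an assumption about the CSV header; adjust it to whatever the extractor actually returns):
from io import StringIO

import pandas as pd
import requests

stuff = requests.get('https://data.import.io/extractor***')

text = stuff.content.decode('utf-8-sig')  # decodes UTF-8 and strips the BOM if present
df = pd.read_csv(StringIO(text))
asins = df['ASIN'].tolist()               # assumes the CSV has an 'ASIN' header column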

How do I get notes info from posts with python/tumblr api?

I am so confused by all these levels of dicts I have to wade through that it would be easier, IMO, just to do it by scraping; however, I guess it's a good exercise to learn dicts, and it will perhaps be quicker once I figure it out.
My code is as follows, where the assignment statement for cposts returns a 404:
import pytumblr
# Authenticate via OAuth
client = pytumblr.TumblrRestClient(
    'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
)
f = client.followers('blog.tumblr.com')
users = f['users']
names = [b['name'] for b in f['users']]
print(names)
cposts = client.posts(names[0], 'notes_info=True')
print(cposts)
But the Python API info says: client.posts('codingjester', **params) # get posts for a blog
and this SO post (Getting more than 50 notes with Tumblr API) says you should use notes_info to get the notes. But I don't know how to construct that in Python rather than by building a URL.
I could construct a URL and make a request, but I figure there is a simpler way using the Python Tumblr API that I just haven't figured out; if someone could illuminate, please do.
Remove the quotes around notes_info=True. You should be passing the value True to the notes_info argument of the client's posts() method. Instead, you're actually passing the string 'notes_info=True' as a positional argument, which is invalid and causes pytumblr to create an invalid URL, which is why you're getting a 404.
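In other words, a minimal sketch of the corrected call, reusing the client and names variables from the question:
cposts = client.posts(names[0], notes_info=True)  # notes_info passed as a real keyword argument
print(cposts)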

Creating a nested JSON request with Python

A user needs to pass a JSON object as part of the request. It would look something like this:
{"token" :"ayaljltja",
"addresses": [
{'name':'Home','address':'20 Main Street',
'city':'new-york'},
{'name':'work', 'address':'x Street', 'city':'ohio'}
]}
I have two problems right now. First, I can't figure out how to test this code by recreating the nested POST. I can successfully POST a dict but posting the list of addresses within the JSON object is messing me up.
Simply using cURL, how might I do this? How might I do it with urllib2?
My second issue is then deserializing the JSON POST object on the server side. I guess I just need to see a successful POST to determine the input (and then deserialize it with the json module).
Any tips?
First make sure your JSON is valid. Paste it into the JSONLint web page.
Currently your JSON has two issues:
there is no comma between "token" :"ayaljltja" and "addresses": [...]
single quotes are not a valid way of delimiting a JSON string; replace them all with double quotes.
With command-line curl, save your JSON to a file, say data.json. Then try: curl -X POST -d @data.json http://your.service.url
It's also possible to enter the JSON directly to the -d parameter but (as it sounds like you know already) you have to get your quoting and escaping exactly correct.
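For the Python side of the question, here is a minimal sketch using the requests library instead of urllib2 (the URL is a placeholder, and the payload is the corrected version of the JSON above):
import requests

payload = {
    "token": "ayaljltja",
    "addresses": [
        {"name": "Home", "address": "20 Main Street", "city": "new-york"},
        {"name": "work", "address": "x Street", "city": "ohio"},
    ],
}

# requests serializes the dict, nested list included, and sets the Content-Type header to application/json
r = requests.post("http://your.service.url", json=payload)
print(r.status_code)
On the server side, json.loads() on the raw request body (or your framework's equivalent of request.json) gives the same nested structure back, so the addresses value is a plain Python list of dicts.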
