I want to post a carousel of n images using python. I have found various sources on how to upload a single picture with a caption by using libraries like InstaBot, but I could not find any source on how to do this with a carousel, if it is at all possible.
I have all the files stored locally and know how to get the filenames and everything within my script. I want to be able to set the order of the images as well. There tend to be 5 images, so the maximum of 10 will never be surpassed.
Anybody know if this is possible and if yes, how do I achieve it?
As far as I know, no library has this option, but it is possible if you use the API on your own.
Update: check the instagrapi library.
Come on, why -2 reputation? I am just trying to help as far as I know; I need reputation to be able to comment on answers.
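For what it's worth, here is a minimal sketch of a carousel upload with instagrapi, assuming the package is installed (pip install instagrapi); the credentials and file names below are placeholders, and the carousel order simply follows the list order (check the current instagrapi docs for the exact signature):

from pathlib import Path
from instagrapi import Client

# Placeholder paths; a carousel accepts 2-10 items and keeps the list order.
paths = [Path(f"image_{i}.jpg") for i in range(1, 6)]

cl = Client()
cl.login("your_username", "your_password")  # placeholder credentials
cl.album_upload(paths, caption="Five-image carousel posted from Python")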
I want to link 2 web pages made using Streamlit and hosted on Heroku. One of them removes the image background and the other does an image classification task. (They could not be made into one because of Heroku's slug size limit.) At present, the user has to manually download the segmented image from one webpage and upload it to the second webpage.
The two can be shown together using an HTML iframe tag, but I am not able to figure out how to transfer the segmented image from one webpage to the other.
Any suggestion or help will be appreciated.
Also, please prefer solutions using Python and its frameworks, as the whole project is in Python and learning JavaScript, HTTP, etc. will take some time.
(but if it's not possible using python, answers using other methods will also be welcome)
One of my seniors advised me to explore other hosting options. (I had seen an online tutorial using Heroku and did not know much about the others.) It turns out that Streamlit Cloud allows a much larger app size and lets you host for free if you open-source your project (I had no issue doing so), so I have combined the two parts and am now hosting on Streamlit Cloud.
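For anyone who hits the same wall, here is a rough sketch of what the combined app can look like. The two model calls are hypothetical placeholders, and it assumes a Streamlit version that has st.tabs and st.session_state, which is how the segmented image is handed from one step to the next without a manual download/upload:

import streamlit as st
from PIL import Image

def remove_background(img):
    # Placeholder for the real segmentation model.
    return img

def classify(img):
    # Placeholder for the real classifier.
    return "some label"

st.title("Background removal + classification")
tab_seg, tab_cls = st.tabs(["Remove background", "Classify"])

with tab_seg:
    upload = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
    if upload is not None:
        segmented = remove_background(Image.open(upload))
        st.image(segmented, caption="Segmented result")
        st.session_state["segmented"] = segmented  # hand the result to the other tab

with tab_cls:
    if "segmented" in st.session_state:
        st.write("Prediction:", classify(st.session_state["segmented"]))
    else:
        st.info("Run the background-removal tab first.")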
So, I am trying to print out GIFs using the Tenor API.
I want it to print only one GIF link, but it prints out everything. Any idea how to fix this?
Thank you.
(Screenshot of the printed output: https://i.stack.imgur.com/xf084.png)
Sadly, I cannot tell you the exact problem you are having; I replicated your code and used the official API docs here.
From what I can tell, this is one GIF just in a lot of different formats.
You can filter them like so:
print(top_8gifs['weburl'])
or
print(top_8gifs['results'][0])
EDIT: Looking at your .png (please paste the output as text/code in the future), this should work for you if you want the URL:
print(top_8gifs[0]['url'])
From a Python dict you select by key (like gifs['weburl']).
From a Python list you select by index (like gifs[0]).
Using these techniques you can gather the data you need from that output.
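To make that concrete, here is a minimal sketch against the Tenor v1 search endpoint; the API key is a placeholder, and the exact response layout differs between API versions (v2 uses media_formats instead of media), so check the current docs:

import requests

API_KEY = "YOUR_TENOR_KEY"  # placeholder
resp = requests.get(
    "https://g.tenor.com/v1/search",
    params={"q": "excited", "key": API_KEY, "limit": 8},
)
resp.raise_for_status()
top_8gifs = resp.json()

# Each entry in "results" is one GIF offered in several formats;
# take the first result and print only its plain GIF rendition.
first_result = top_8gifs["results"][0]
print(first_result["media"][0]["gif"]["url"])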
I am aware that this can't be done with a bash script alone, or at least not as far as I know (and I'm still learning). This is why I'm asking for help. What more do I need? Are there specific tools?
This is what I'd like to do:
Upload an image to https://www.google.com/searchbyimage/upload
Then find all the identical images
Download the one which has the greatest resolution
So far I've been able to upload an image to Searchbyimage through curl. The uploaded image then creates a very long token that is used to search for similar images, along with some supplementary keywords.
The uploaded image creates a link composed like so:
https://www.google.com/search?tbs=sbi:
After this is the awfully long token: AMhZZith3JfR2OzwmuyQjufBifvdFWNjMShRMypWIE2-g005QfYLeTATLhGHAWz8MLI-tbgHzZp-bREPlJbsNWhY7U4Z2_19bu0oHII6VJPIVVJSPANODqnrJXp6X5VKKoXHMLcBCmI9eIpxS_1EX9g9YJPFL2XFEfJqIApLX83erP5mlRM7rSiIF5Te_1RPNyVkp4IPZPBRtoOKGhpDw2xad-JZsqd2ai4F5sMvyO2A_18PMFKg21nTRH_1jVeOeUhz8U5zkL4lycIg3kafAYlNy8YwmjSFcmc2nZB_10t9MFyi2BnBmemDRp4DCACI0FVM6pLTIB8VCBpU9A
And it adds this at the end: &hl=fr.
Finally the image is searched, and I have the choice between clicking "similar images" or "all sizes" (it's "all sizes" I want, as "similar images" doesn't ensure they will be identical). This will add some keywords from Google's analysis of the picture (here, a photograph of Émile Zola) and create a second token:
The picture I searched here
https://www.google.com/search?safe=strict&hl=fr&
q=emile+zola&tbm=isch
&tbs=simg:
CAQSmQEJthA57uIOXdcajQELEKjU2AQaBggXCD0IQgwLELCMpwgaYgpgCAMSKLQZ9QH3BLMZ2A6xGdcO3w70Ad0OwjrEOqEuwzqiLsE67iSTLoM4oC4aMIk1iw7XQn7Wu55hLB2k-bnfW3_1yf24eA0N-w-baKvWkDj48J67yZZS-uQ-BgjCRQyAEDAsQjq7-CBoKCggIARIEnfZWUgw&sa=X&ved=0ahUKEwi965ashtrhAhWI3eAKHSmRCBwQ2A4IKygB
&biw=1920&bih=944
With, at the end, the resolution of the picture. The idea is to recreate this second link and then download the highest-resolution image among what Google has found. I have to get the token, but everything else can be found from the picture file itself: the file is properly named after the picture, and so could provide the keywords, and its resolution is also easily known. I'd like to make this a script so I can download higher-resolution versions of many paintings (over a thousand) that are in low quality, and ideally I'd use it quite often. So far I had found how to upload a picture with curl, and it gave me back a token, but an incomplete one. Beyond this, I was completely lost.
In theory this doesn't seem impossible. The problem is that I'm too much of a newbie: I enjoy Linux and bash a lot so far, but I know very little. I have of course done some hours of googling; nothing showed up that I knew I could use. There is nothing like this on GitHub either: a lot of scripts that search for similar images, but none for identical ones, and none that also compare the sizes of those images. There's also a Python API for reverse image searching, but it didn't seem able to search for identical images, and it seems tied to the Google API, which is problematic. All of this is probably hard for me because I'm only a beginner and don't know enough to build this script; yet in another way (maybe due to my lack of knowledge) it doesn't seem impossible at all, and I'm very willing to try, fail, try again: learn. So here I am, to ask: how do I do this? Can it be done in bash only? If not, what must I include? Or perhaps it cannot be done at all?
Lastly, I know there is a Google API for reverse image searching. That would be very useful if it weren't limited to a hundred image searches a day: if you want more, you have to pay. And at 100 images a day, it would take me around eleven days to reverse-search all the images I want in better quality: in the end, I'd be done just as fast searching by hand. So neither of these options seems to be a solution, yet this script doesn't seem impossible. It is only beyond my current capacities.
Thank you in advance if anyone has an idea!
PS: I can use Linux either through WSL or a virtual machine. Both work very well so far, including whatever command or package I need; WSL is much faster. And sorry for my English, I'm French!
Second PS: I've been asked to show what code I had, but it doesn't go beyond this:
curl -i -F sch=sch -F encoded_image=@path/to/my/imagefile.jpg https://www.google.com/searchbyimage/upload
Which was a partial answer to my question I had found here:
How to use google search by image in curl
There are two fundamental ways to use the web programmatically:
Via an API: this is purpose-built for computers to access web resources and is always preferred. You follow strict rules and get well-defined results back.
By crawling: this is when the computer pretends to be a user, emulating the clicking of links done in a browser. Basically curl, but over and over again, with state stored in between, parameters generated correctly, encoding applied, etc.
As you say, there's an API available, so if it does what you want then it's the right way to go. The fact that it does what you want but enforces limits is a very useful sign that what you're trying to do has limits. Those limits will have been carefully set to incentivise you to work within them. Trying to crawl for the same results will likely breach either Google's terms of service or your sanity limits.
So if you really want to work around the API, then use a crawler library such as Python's Scrapy. But note that the API limits might be a useful indication of how far you can expect to get without paying.
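If you do go the crawling route, here is a minimal sketch of just the first step in Python, mirroring the curl call from the question: it uploads the image and reads the redirect URL that carries the sbi: token. The browser-like User-Agent and the endpoint's behaviour are assumptions, not a documented interface; Google can change or block this at any time.

import requests

UPLOAD_URL = "https://www.google.com/searchbyimage/upload"

with open("imagefile.jpg", "rb") as f:  # placeholder file name
    resp = requests.post(
        UPLOAD_URL,
        files={"encoded_image": f},
        data={"sch": "sch", "image_content": ""},
        headers={"User-Agent": "Mozilla/5.0"},
        allow_redirects=False,  # keep the 302 so its Location header stays visible
    )

print(resp.status_code)
print(resp.headers.get("Location"))  # the search URL containing the tbs=sbi: token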
Hi, I'm new to Python programming.
Currently I'm working on a project which needs to find the distance between 2 points (lat & lon) offline.
I know Google Maps provides this service, but I can't use it since it has a limit for free accounts.
So, I googled around and found that pyroutelib2 can do this for me using OpenStreetMap map data.
pyroutelib link
And now I'm kinda stuck. I'm running on Windows 8 x64 and my Python is 2.7.
I have downloaded pyroutelib2 from this link:
http://svn.openstreetmap.org/applications/routing/pyroutelib2/
and have my country's map (an .osm.bz2 file) ready. The problem is, when I type the command
loadosm.py f:\asia.osm car
loadosm.py f:\asia.osm.bz2 car
loadosm.py f:\asia.osm.pbf car
(the OSM file is in a different directory)
in my console, the OSM file won't load and it returns this message:
Loaded 0 nodes
Loaded 0 cycle routes
Searching for node: found None
Anybody, please help me. Thanks.
I get the same output. Either pyroutelib2 or its documentation is broken.
I suggest just using another routing library/tool. See the OSM wiki about routing, as well as the lists of online routers and offline routers. There are lots of interesting solutions available.
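One more thought: if a straight-line (great-circle) distance between the two coordinates is enough, rather than an actual road route, no map data or routing library is needed at all. A minimal pure-Python sketch, with example coordinates:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km is the mean Earth radius

print(haversine_km(-6.2088, 106.8456, -6.9147, 107.6098))  # Jakarta to Bandung, roughly 115 km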
Check out osmapi, it's what I've used to get OSM files and import them into pyroutelib2. I don't know if that will solve your problems, but I've had luck going that route.
I am trying to rename the images in my massive pictures folder by searching Google Images with each image and naming each one after the result shown next to "Best guess for this image:". I understand that Google does have a Python API, but I am unsure whether it can be used in this way, or whether this is a reasonable project for someone of my limited experience.
https://developers.google.com/appengine/docs/python/images/usingimages#Uploading seems helpful, but I'm not sure I understand what I need to be doing conceptually.
Another option is to use the drag-and-drop feature but I have not looked into that as much.
Thanks in advance for any guidance.
As far as I know, Google still doesn't offer a public API for its reverse image search service (i.e. you send a picture and get textual search results).
The most popular alternative that I know of is TinEye (http://www.tineye.com/). Here's a link to their RESTful API: http://services.tineye.com/TinEyeAPI
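Whichever service ends up supplying the label, the renaming half of the task is plain Python. Here is a sketch in which best_guess() is a hypothetical placeholder for the reverse-image-search call (TinEye or anything else); note it does not guard against two images getting the same label:

import os
import re

def best_guess(path):
    # Hypothetical: return a descriptive label for the image at `path`.
    raise NotImplementedError("plug in TinEye or another reverse-image-search call here")

folder = "pictures"
for name in os.listdir(folder):
    root, ext = os.path.splitext(name)
    if ext.lower() not in {".jpg", ".jpeg", ".png", ".gif"}:
        continue
    label = best_guess(os.path.join(folder, name))
    safe = re.sub(r"[^\w\- ]", "", label).strip() or root  # fall back to the old name
    os.rename(os.path.join(folder, name), os.path.join(folder, safe + ext))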