I haven't found an existing question covering what I'm looking for: image optimization.
I've tested how much Facebook optimizes uploaded images:
980KB --> 77KB
846KB --> 62.1KB
From what I found, Facebook can reduce an image's size by a factor of up to 10 while still retaining acceptable image quality, as the tests above show.
So, can anyone share the best ways you have implemented to optimize images uploaded by users?
Searching the internet, I've seen some websites offering paid image optimization services. However, we'd prefer not to subscribe to any paid service at this stage.
I'm developing the project in Python on Google App Engine. Are there any Python libraries, or even Google App Engine libraries, we can reuse to achieve this?
You should probably star this issue to get pngcrush-like functionality added to the App Engine Images API.
Basic optimization boils down to:
1. Choosing the appropriate format for the image (usually JPEG for photographs; you can use JPEG across the board if you're not concerned about image quality, but otherwise PNG for screenshots etc. may be wise)
2. Reducing the image to the smallest resolution appropriate for your application
3. Increasing the compression level to the highest level possible while maintaining your quality standards
You can also nitpick by stripping extraneous metadata, but that is usually unnecessary and not desirable.
If you want to do all of this in an automated fashion, you'll have to set a standard format and compression level across the board and accept that the result won't be perfect in all cases, or else determine programmatically which settings are appropriate for each image (which is quite difficult, unless you simply ask your users directly at upload time).
Normally I would use ImageMagick via the PythonMagick bindings for this task, but that may not be feasible on Google App Engine. In that case, maybe look at the Python Imaging Library.
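To make that concrete, here is a minimal sketch of those three steps using Pillow (the maintained fork of the Python Imaging Library); the file names, size cap, and quality value are placeholder assumptions to tune against your own quality standards:

```python
from PIL import Image

MAX_SIZE = (1280, 1280)  # assumed resolution cap; tune per application
JPEG_QUALITY = 70        # assumed compression level; tune per quality bar

def optimize(src_path, dst_path, photo=True):
    img = Image.open(src_path)
    # Step 2: shrink to the smallest acceptable resolution (keeps aspect ratio).
    img.thumbnail(MAX_SIZE, Image.LANCZOS)
    if photo:
        # Step 1: JPEG for photographs; flatten any alpha channel first.
        img = img.convert("RGB")
        # Step 3: push compression as far as your quality standard allows.
        img.save(dst_path, "JPEG", quality=JPEG_QUALITY, optimize=True)
    else:
        # PNG for screenshots, diagrams, and other sharp-edged images.
        img.save(dst_path, "PNG", optimize=True)

optimize("upload.jpg", "upload-optimized.jpg")
```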
Another solution is to use a third-party API; in this case, you can use TinyPNG. Their compression algorithm is probably one of the best out there. Check their developer guide here:
https://tinypng.com/developers
The first 500 images per month are free; after that it's roughly $0.009 per image (between 500 and 9,500 images) and $0.002 per image above 10,000.
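If you go this route, TinyPNG publishes an official Python client, tinify; a minimal sketch (the API key and file names are placeholders):

```python
import tinify

tinify.key = "YOUR_API_KEY"  # placeholder; issued at tinypng.com/developers

# Upload, compress on TinyPNG's servers, and save the optimized result.
source = tinify.from_file("unoptimized.png")
source.to_file("optimized.png")
```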
Unfortunately, you can't use PythonMagick on App Engine. But the Python Imaging Library can be installed; see the Google Images service on how to use it.
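As a rough sketch of what that looks like with the Images service in the legacy first-generation Python runtime (the width and output encoding are assumptions to tune):

```python
from google.appengine.api import images

def shrink(image_data):
    # Resize server-side to at most 1280px wide (height scales to match)
    # and re-encode as JPEG.
    return images.resize(image_data,
                         width=1280,
                         output_encoding=images.JPEG)
```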
There is no magic-bullet Facebookesque optimization; you will have to develop your own pipeline that meets the standards you need. Most images these days are 5 MP and up, and resizing them to 1280x720 or less is normal on websites. The ability to crop away extraneous parts of the image before resizing is also desirable.
I want to link two web pages made using Streamlit and hosted on Heroku. One of them removes the image background and the other does an image classification task. (They could not be combined into one app due to Heroku's slug size limits.) At present, the user has to manually download the segmented image from one web page and upload it to the second.
The two can be shown together using an HTML iframe tag, but I am not able to figure out how to transfer the segmented image from one web page to the other.
Any suggestion or help will be appreciated.
Also, please prefer solutions using Python and its frameworks, as the whole project is in Python and learning JavaScript, HTTP, etc. will take some time.
(But if it's not possible in Python, answers using other methods are also welcome.)
One of my seniors advised me to explore other hosting options. (I had seen an online tutorial using Heroku and did not know much about the others.) It turns out that Streamlit Cloud provides a much larger slug size and lets you host for free if you open-source your project (I had no issue doing so), so I have combined the two parts and am now hosting on Streamlit Cloud.
[Image: photo of the meter's LCD display]
I need to get the text from a few images of a meter; specifically, I need to read the text from the meter's LCD. I have tried several approaches, but without success.
Use Google Cloud Platform (GCP). It gives you a lot of APIs for computer vision.
In GCP they also provide an API to detect text in an image.
Check the link below for a detailed description of how it works.
GCP OCR Documentation
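For reference, a minimal sketch of calling Cloud Vision text detection from Python with the google-cloud-vision client; the file name is a placeholder, and credentials are assumed to be configured in the environment:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Read the meter photo and request OCR.
with open("meter.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)
# The first annotation is the full detected text block;
# the rest are the individual tokens.
print(response.text_annotations[0].description)
```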
If this is not what you are looking for, then make your question more descriptive about exactly what platform you are working on and what you have tried so far.
I agree with what @AntoPravin has answered. Also, in answer to your comment, I'd like to add that GCP's detection is far more powerful than the Microsoft Vision API. I've personally compared Google's Vision API, Microsoft's Vision API, and Tesseract, and GCP is miles ahead of the other two. GCP can detect almost everything that you can see with the naked eye.
I tried GCP on your image. These are the results (first the individual tokens, then the full text line), and as you can see, I'm able to get the reading of the meter. Extracting the numerical value from this text is not a problem; you can use a regex for that.
LOBAT
15.4
mv
SHUNT
ATXP 010
ON
LOBAT 15.4 mv SHUNT ATXP 010 ON
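As a sketch of that regex step, assuming the meter reading is the only number with a decimal point in the detected text:

```python
import re

detected = "LOBAT 15.4 mv SHUNT ATXP 010 ON"

# Grab the first decimal number; here that is the meter reading.
match = re.search(r"\d+\.\d+", detected)
if match:
    print(float(match.group()))  # 15.4
```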
I am aware that this can't be done with a bash script alone, or at least not as far as I know (and I'm still learning). That's why I'm asking for help: what more do I need? Are there specific tools?
This is what I'd like to do:
Upload an image to https://www.google.com/searchbyimage/upload
Then find all the identical images
Download the one which has the greatest resolution
So far I've been able to upload an image to searchbyimage through curl. The upload then produces a very long token that is used to search for similar images, along with some supplementary keywords.
The uploaded image creates a link composed like so:
https://www.google.com/search?tbs=sbi:
After this is the awfully long token: AMhZZith3JfR2OzwmuyQjufBifvdFWNjMShRMypWIE2-g005QfYLeTATLhGHAWz8MLI-tbgHzZp-bREPlJbsNWhY7U4Z2_19bu0oHII6VJPIVVJSPANODqnrJXp6X5VKKoXHMLcBCmI9eIpxS_1EX9g9YJPFL2XFEfJqIApLX83erP5mlRM7rSiIF5Te_1RPNyVkp4IPZPBRtoOKGhpDw2xad-JZsqd2ai4F5sMvyO2A_18PMFKg21nTRH_1jVeOeUhz8U5zkL4lycIg3kafAYlNy8YwmjSFcmc2nZB_10t9MFyi2BnBmemDRp4DCACI0FVM6pLTIB8VCBpU9A
And it adds this at the end: &hl=fr.
Finally the image is searched, and I have the choice between clicking "similar images" or "all sizes" (it's "all sizes" I want, as "similar images" doesn't ensure the results are identical). This adds some keywords from Google's analysis of the picture (here, a photograph of Émile Zola) and creates a second token:
The picture I searched here
https://www.google.com/search?safe=strict&hl=fr&
q=emile+zola&tbm=isch
&tbs=simg:
CAQSmQEJthA57uIOXdcajQELEKjU2AQaBggXCD0IQgwLELCMpwgaYgpgCAMSKLQZ9QH3BLMZ2A6xGdcO3w70Ad0OwjrEOqEuwzqiLsE67iSTLoM4oC4aMIk1iw7XQn7Wu55hLB2k-bnfW3_1yf24eA0N-w-baKvWkDj48J67yZZS-uQ-BgjCRQyAEDAsQjq7-CBoKCggIARIEnfZWUgw&sa=X&ved=0ahUKEwi965ashtrhAhWI3eAKHSmRCBwQ2A4IKygB
&biw=1920&bih=944
With the resolution of the picture at the end. The idea is to recreate this second link, then download the highest-resolution image among those Google has found. I have to get the token, but everything else can be found in the picture file itself: the file is properly named after the picture, and thus could provide the keywords, and its resolution is also easily known. I'd like to make this a script so I can download higher-resolution versions of many paintings (over a thousand) that I have in low quality; ideally I'd use it quite often. So far I had found how to upload a picture with curl, and it gave me back a token, but an incomplete one. Beyond this, I was completely lost.
In theory this doesn't seem impossible. The problem is that I'm too much of a newbie: I'm enjoying Linux and bash a lot so far, but I still know very little. I have of course done some hours of googling beforehand; nothing showed up that I knew I could use. There is nothing similar on GitHub either: a lot of scripts that search for similar images, but none for identical ones, and none that compare the sizes of the images found. There's also a Python API for reverse image searching, but it didn't seem able to search for identical images, and it seems tied to the Google API, which is problematic. All of this is probably stupidly hard for me because I'm only a beginner and don't know enough to build this script; on the other hand (maybe due to my lack of knowledge) it doesn't seem impossible at all, and I'm very willing to try, fail, try again: learn. So here I am, to ask: how do I do this? Can it be done in bash only? If not, what must I include? Or can it not be done at all?
Lastly, I know there is a Google API for reverse image searching. It would be very useful if it weren't limited to a hundred image searches a day: if you want more, you have to pay. And at 100 images a day, it would take me around eleven days to reverse-search all the images I want in better quality; in the end, I'd be done just as fast searching by hand. So neither of these options seems to be a solution, and yet this script doesn't seem impossible. It is only beyond my current capacities.
Thank you in advance, if anyone has an idea!
PS: I can use Linux either through WSL or through a virtual machine. Both have worked fine so far, with every command or package I've tried; WSL is much faster. And sorry for my English, I'm French!
Second PS: I've been asked to show what code I have, but it doesn't go beyond this:
curl -i -F sch=sch -F encoded_image=#path/to/my/imagefile.jpg https://www.google.com/searchbyimage/upload
This was a partial answer to my question, which I had found here:
How to use google search by image in curl
There are two fundamental ways to use the web programmatically:
Via an API: this is purpose-built for computers to access web resources and is always preferred. You follow strict rules and get well-defined results back.
By crawling: this is when the computer pretends to be a user, emulating the clicks done in a browser. Basically curl, but over and over again, with state stored in between, parameters generated correctly, encoding applied, etc.
As you say, there's an API available, so if it does what you want, then it's the right way to go. The fact that it does what you want but enforces limits is a very useful sign that what you're trying to do has limits. Those limits will have been carefully set to incentivise you to work within them. Trying to crawl for the same results will likely breach either Google's terms of service or your sanity limits.
So if you really want to work around the API, use a crawler library such as Python's Scrapy. But note that the API limits might be a useful indication of how far you can expect to get without paying.
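For a sense of what that would look like, here is a minimal, hedged Scrapy sketch; the URL, the sbi: token, and the selectors are placeholder assumptions, and in practice Google actively blocks this kind of scraping:

```python
import scrapy

class AllSizesSpider(scrapy.Spider):
    name = "all_sizes"
    # Placeholder: the token would come from the curl upload step.
    start_urls = ["https://www.google.com/search?tbs=sbi:YOUR_TOKEN"]

    def parse(self, response):
        # Placeholder logic: follow links into the image-results page.
        for href in response.css("a::attr(href)").getall():
            if "tbm=isch" in href:
                yield response.follow(href, callback=self.parse_results)

    def parse_results(self, response):
        # Yield candidate image URLs; picking the largest one would
        # require parsing the advertised resolutions as well.
        for url in response.css("img::attr(src)").getall():
            yield {"image_url": url}
```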
I was given a practice task that I cannot get any further with.
The task is the following:
Project Description
The goal of the project is to create a map view similar to Google Maps, where the user can see imagery data captured by drones.
The user should be able to move around the map freely, as well as zoom in and out to take a closer look at the captured imagery data.
It is strongly desired that the served imagery data support transparency while minimizing file size and bandwidth usage. This does not have to be implemented, but solution ideas are welcome.
The raw imagery data will be provided as GeoTIFF files. Imagery visible on the map can be added by placing a file inside a directory that is read by the server.
Project Delivery Method
The project should be delivered as a Git repository with the documentation required to set up and run the project.
Requirements
1. Server implementation in Python 3.5+
2. Project must be able to run on Ubuntu Server 16.04
3. Optimal disk space usage for imagery data displayed to the user (as the app may be processing terabytes of satellite imagery data)
4. Relatively conservative bandwidth usage
Notes:
1. The project will be deployed on a machine that is already running other Python software. Dependency conflicts must be avoided. (virtualenv, Docker)
2. The UI can be a simple HTML page with embedded libraries and inline scripts.
In addition, it was specified in an e-mail:
"The test task is not code but just the approach and rough app
architecture
```I'm attaching a tank spec. Like I've mentioned. I'm more interested
in problem-solving and your ideas. I expect a working prototype tough.
Use any libraries you wish to use. Create an elegant, easy to
understand the solution. You can use as much time as you want. Would be
great if you could deliver the code by git.... ```"
So far I have done:
Ubuntu as VM
Venv
Postgres and PostGIS installed (Django writes to the database without errors)
Django project and app created
Documentation up to this point
I have now loaded the GeoTIFF via the console, and that seems to work too:
from django.contrib.gis.gdal import GDALRaster
raster = GDALRaster('base/static/base/geotiff/xto-site3-rgb.tif')
raster.name
Out[4]: 'base/static/base/geotiff/xto-site3-rgb.tif'
raster.width, raster.height
Out[5]: (23001, 9668)
In models.py I have so far:
from django.contrib.gis.db import models

class RasterBase(models.Model):
    raster = models.RasterField()
    name = models.TextField()
How do I proceed from here so that I can serve the raster and display it in an HTML page, similar to Google Maps? If I understand correctly, I must now write the GeoTIFF into the database and read it from there, right?
Unfortunately, I mostly find only outdated material on the web, or examples that assume shapefiles. Should I convert the raster to a shapefile and continue that way?
So far I have only done small things in Django, like my own blog and a few statistics, but this GeoDjango task is a bit fierce, because I have to hand it in tomorrow, Tuesday morning at the latest.
I would be very grateful if someone could give me some tips. All in all, this is pretty important to me, and it would be a shame if I messed up half (or the last third) of the task.
Django is version 2.0
The GeoTIFF is ~900 MB.
Thanks for everything. :-)
Late to the party, but maybe some people are searching for a solution here.
When you want to display geodata on a map, you can use a web GIS framework like OpenLayers or Leaflet. They provide all the functionality to pan the map and zoom in and out.
I would not recommend storing large raster data in a database. You can serve it directly from a file server via a TileLayer, or use an XYZ tiling structure to minimize bandwidth usage.
OpenLayers has a lot of examples of how to serve GeoTIFF files.
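As a sketch of the XYZ tiling idea: GDAL ships a gdal2tiles.py utility that pre-cuts a GeoTIFF into the z/x/y folder structure that Leaflet and OpenLayers tile layers expect. The paths and zoom range below are assumptions:

```python
import subprocess

# Pre-cut the raster into a tile pyramid; the resulting tiles/
# directory can then be served by any plain static file server.
subprocess.run(
    ["gdal2tiles.py",
     "--profile", "mercator",  # web-mercator tiling scheme
     "--zoom", "0-14",         # assumed zoom range; tune to the data
     "xto-site3-rgb.tif",
     "tiles/"],
    check=True,
)
```

A Leaflet TileLayer (with its tms option set, since gdal2tiles emits TMS-ordered tiles by default) or an OpenLayers XYZ source can then point at tiles/{z}/{x}/{y}.png.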
Separation of concerns is important for keeping code maintainable. Is anybody aware of already-built frameworks that implement, say, an MVC approach to image rendering with, for example, PIL (or Pillow)?
It is probably not too hard to come up with such a framework, but knowing any existing best practices would help avoid repeating known mistakes.
Edit: To be clear, my request concerns creating a new image by combining and overlaying other images. An analogy might be a framework like Django, which uses models to generate HTML pages; similarly, this framework or architecture would allow generating dynamic PNGs from a dynamic dataset.
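For context, the kind of primitive such a framework would wrap is Pillow's compositing. A minimal sketch of overlaying one image onto another (file names and coordinates are placeholders):

```python
from PIL import Image

# "View" step: composite a data-driven overlay onto a base image.
base = Image.open("background.png").convert("RGBA")
overlay = Image.open("chart.png").convert("RGBA")

# Paste the overlay at an offset, honoring its alpha channel.
base.alpha_composite(overlay, dest=(20, 20))
base.save("rendered.png")
```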
Here's a resource that provides many examples of server-side image resizing.
https://github.com/adamdbradley/foresight.js/wiki/Server-Resizing-Images
I'm not sure if any of them would meet the needs of your project as is, but I'm sure there are some takeaways from their implementations that you can apply to your own project.