reduce the time to save face recognition encodings - python

I have a dataset and I am saving the result of encoding the images for face recognition in a pickle object.
I would like to add new images to, or delete images from, the database; when I do, the encodings of the images already stored in dataset_faces.dat should be kept, and encode_faces.py should only run for the new images.
I want to reduce the time it takes to save the encodings in encoding.pickle; otherwise a lot of time is spent even when adding a single new image. (A sketch of one incremental approach follows the script below.)
encode_faces.py
import face_recognition
import numpy as np
import os
import pickle

known_person = []
known_image = []
known_face_encoding = []

for file in os.listdir("Imagefolder"):
    # Extract the person's name from the image filename, e.g. Abhilash.jpg
    known_person.append(str(file).replace(".jpg", ""))
    file = os.path.join("Imagefolder", file)
    known_image = face_recognition.load_image_file(file)
    known_face_encoding.append(face_recognition.face_encodings(known_image)[0])

with open('dataset_faces.dat', 'wb') as f:
    pickle.dump(known_face_encoding, f, pickle.HIGHEST_PROTOCOL)

with open('dataset_fac.dat', 'wb') as d:
    pickle.dump(known_person, d)

print(known_face_encoding)
print(known_person)
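One way to avoid re-encoding the whole folder on every run is to keep the encodings in a dictionary keyed by person name, load that dictionary if the pickle already exists, and only encode files that are not in it yet. The following is only a sketch of that idea, reusing the Imagefolder directory and dataset_faces.dat file from the script above; it stores a single dictionary instead of the two parallel lists, so the loading side would need a matching change, and the deletion handling is an assumption about the desired behaviour.

import os
import pickle
import face_recognition

DATA_FILE = "dataset_faces.dat"   # same pickle file as in encode_faces.py
IMAGE_DIR = "Imagefolder"

# Load previously saved encodings if the pickle already exists.
if os.path.exists(DATA_FILE):
    with open(DATA_FILE, "rb") as f:
        known_face_encodings = pickle.load(f)   # dict: name -> 128-d encoding
else:
    known_face_encodings = {}

current_names = {f.replace(".jpg", "") for f in os.listdir(IMAGE_DIR) if f.endswith(".jpg")}

# Drop encodings for images that were deleted from the folder.
for name in list(known_face_encodings):
    if name not in current_names:
        del known_face_encodings[name]

# Encode only the images that are not in the pickle yet.
for name in current_names - set(known_face_encodings):
    image = face_recognition.load_image_file(os.path.join(IMAGE_DIR, name + ".jpg"))
    encodings = face_recognition.face_encodings(image)
    if encodings:                   # skip files where no face was found
        known_face_encodings[name] = encodings[0]

with open(DATA_FILE, "wb") as f:
    pickle.dump(known_face_encodings, f, pickle.HIGHEST_PROTOCOL)

This way only the new images go through face_encodings, which is where almost all of the time is spent; the pickle dump itself is cheap.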

Related

How to analyze multiple images in a folder using a loop?

I am using the Google Cloud Vision API in Python.
I am a Python beginner, so I am struggling to turn what I want to analyze into Python code.
It's a simple thing, but I'd appreciate any help.
I want to do label detection with Google Cloud Vision.
I've managed to load a single image and run the code on it, but I want to run it on an entire folder containing multiple images.
import io
import pandas as pd
from google.cloud import vision

client = vision.ImageAnnotatorClient()

file_name = r'img_3282615_1.jpg'
image_path = f'.\save_img\{file_name}'

with io.open(image_path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)
response = client.label_detection(image=image, max_results=100)
labels = response.label_annotations

df = pd.DataFrame(columns=['description', 'score', 'topicality'])
for label in labels:
    # note: DataFrame.append was removed in pandas 2.0; pd.concat is the modern equivalent
    df = df.append(
        dict(
            description=label.description,
            score=label.score,
            topicality=label.topicality
        ), ignore_index=True)
print(df)
I've tried analyzing individual images using this code.
Here I would like to do the following steps:
Open the folder
Run label detection for all images in the folder (the image names are 'img_3282615_1.jpg', 'img_3282615_2.jpg', 'img_3282615_3.jpg', 'img_1115368_1.jpg', 'img_1115368_2.jpg', ...)
Save the result as CSV (image name, description, score)
I've learned that this can be done by repeating with a for statement, but I find it difficult to actually write the code, because I'm just starting out with Python and lack the basics.
Your answer would be of great help to me.
Thank you :)
Can you try this:
from google.cloud import vision
import os
import csv

# Create a client for the Cloud Vision API
client = vision.ImageAnnotatorClient()

# Set the path to the folder containing the images
folder_path = './image_for_text/'

fields = ['description', 'score', 'topicality']
filename_CSV = "./z.csv"
list1 = []

with open(filename_CSV, 'a+') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(fields)

# Loop through all the files in the folder
for filename in os.listdir(folder_path):
    # Check if the file is an image
    if filename.endswith('.jpg') or filename.endswith('.png'):
        # Build the full path to the image
        file_path = os.path.join(folder_path, filename)
        # Open the image file
        with open(file_path, 'rb') as image_file:
            # Read the image file into memory
            content = image_file.read()
        # Create a vision image from the binary data
        image = vision.Image(content=content)
        # Perform label detection on the image
        response = client.label_detection(image=image)
        labels = response.label_annotations
        # Print the labels for the image
        print(f'Labels for {filename}:')
        for label in labels:
            list1.append(f'{label.description}')
            list1.append(f'{label.score*100:.2f}%')
            list1.append(f'{label.topicality}')
        print(list1)
        with open(filename_CSV, 'a+') as csvfile:
            writer = csv.writer(csvfile)
            writer.writerow(list1)
        list1.clear()
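Since the question asks for a CSV of (image name, description, score), a variant that writes one row per detected label and includes the filename may be closer to what is wanted. This is only a sketch along the lines of the answer above; the output file name labels.csv is a placeholder.

from google.cloud import vision
import os
import csv

client = vision.ImageAnnotatorClient()
folder_path = './image_for_text/'
csv_path = './labels.csv'          # hypothetical output file

with open(csv_path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['image_name', 'description', 'score'])
    for filename in os.listdir(folder_path):
        if not filename.lower().endswith(('.jpg', '.png')):
            continue
        with open(os.path.join(folder_path, filename), 'rb') as image_file:
            image = vision.Image(content=image_file.read())
        labels = client.label_detection(image=image).label_annotations
        for label in labels:
            # one row per detected label: image name, description, score
            writer.writerow([filename, label.description, f'{label.score:.4f}'])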

How to process data before downloading using st.download_button with on_click callback?

I have an app where my model returns its result as a np.ndarray and I'm showing it with st.image(result_matrix). I want to give my users the ability to download this image, but the problem is that I have to convert it to a PIL.Image and pass buffer.getvalue() as the data for the button. I can do this, but my users don't download very often, and to save computation I don't want to convert EVERY result to a PIL.Image.
Is there any functionality to process the data on demand, only when the user actually downloads it?
I tried the code below, but it gave me the obvious error that download_button doesn't accept a numpy array:
import streamlit as st
from PIL import Image
import numpy as np
from io import BytesIO

st.session_state['result'] = some_numpy_RGB_array

def process_image():
    img = Image.fromarray(st.session_state['result'])
    buffer = BytesIO()
    img.save(buffer, format="jpeg")
    st.session_state['result'] = buffer.getvalue()

_ = st.download_button(label="Download", data=st.session_state['result'], file_name="image.jpeg", mime="image/jpeg", on_click=process_image)
I'm only aware of the workaround given here:
import streamlit as st

def generate_report(repfn):
    with open(repfn, 'w') as f:
        f.write('Report')
    st.write('done report generation')

if st.button('generate report'):
    repfn = 'report.pdf'
    generate_report(repfn)
    with open(repfn, "rb") as f:
        st.download_button(
            label="Download report",
            data=f,
            file_name=repfn)
It's not ideal, because the user has to click two buttons: one to generate the (in your case) image, and a second one to actually download it. But I guess it's better than nothing.
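Adapted to the image case, the same two-step idea could look roughly like the sketch below; the JPEG conversion only runs when the user clicks the first button, so ordinary reruns skip it. Here result_matrix is a placeholder standing in for the model output.

import streamlit as st
import numpy as np
from PIL import Image
from io import BytesIO

# Placeholder for the np.ndarray produced by the model.
result_matrix = np.zeros((64, 64, 3), dtype=np.uint8)

st.image(result_matrix)

if st.button("Prepare download"):
    # Convert to JPEG bytes only when the user actually wants the file.
    img = Image.fromarray(result_matrix)
    buffer = BytesIO()
    img.save(buffer, format="JPEG")
    st.download_button(
        label="Download image",
        data=buffer.getvalue(),
        file_name="image.jpeg",
        mime="image/jpeg",
    )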

Adding multiple numpy strings from folder of images to pickle database

I have a folder with multiple images that I want to convert to numpy arrays and insert into a pickle database.
So far I can add the numpy array of a single image to the database, but I'm stuck on adding the other arrays.
This is the code to add a single image to the database:
import numpy as np
import cv2
import pickle
path = "/path/to/image/test.jpg"
template = cv2.imread(path)
(tH, tW) = template.shape[:2]
templateGray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
data = {"encodings": templateGray, "names": "test"}
f = open("database.pickle", "wb")
f.write(pickle.dumps(data))
f.close()
Any solutions to this, or ideas?
TYVM :)
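A possible approach, sticking to the structure of the snippet above, is to loop over the folder and collect one grayscale array and one name per image, then pickle the whole thing once. This is only a sketch; the folder path is a placeholder.

import os
import pickle
import cv2

folder = "/path/to/image/folder"          # placeholder path
encodings = []
names = []

for filename in os.listdir(folder):
    if not filename.lower().endswith((".jpg", ".png")):
        continue
    template = cv2.imread(os.path.join(folder, filename))
    if template is None:                  # skip unreadable files
        continue
    templateGray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    encodings.append(templateGray)
    names.append(os.path.splitext(filename)[0])

data = {"encodings": encodings, "names": names}
with open("database.pickle", "wb") as f:
    pickle.dump(data, f)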

Append new facial encoding to a pickle file

I just want to add a new facial encoding to the pickle file. I've tried the following method but it's not working.
Creating the pickle file:
import face_recognition
import pickle

all_face_encodings = {}

img1 = face_recognition.load_image_file("ex.jpg")
all_face_encodings["ex"] = face_recognition.face_encodings(img1)[0]

img2 = face_recognition.load_image_file("ex2.jpg")
all_face_encodings["ex2"] = face_recognition.face_encodings(img2)[0]

with open('dataset_faces.dat', 'wb') as f:
    pickle.dump(all_face_encodings, f)
Appending new data to the pickle file.
import pickle

img3 = face_recognition.load_image_file("ex3.jpg")
all_face_encodings["ex3"] = face_recognition.face_encodings(img3)[0]

with open('dataset_faces.dat', 'wb') as f:
    pickle.dump(img3, f)
    pickle.dump(all_face_encodings["ex3"], f)
But It's not working. Is there a way to append it?
I guess your steps should be:
Load the old pickle data into memory as the <all_face_encodings> dictionary;
Add the new encodings to the dictionary;
Dump the whole dictionary to the pickle file again.
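As a rough sketch of those three steps, reusing the file and image names from the question:

import pickle
import face_recognition

# 1. Load the existing dictionary from the pickle file.
with open('dataset_faces.dat', 'rb') as f:
    all_face_encodings = pickle.load(f)

# 2. Add the new encoding to the dictionary.
img3 = face_recognition.load_image_file("ex3.jpg")
all_face_encodings["ex3"] = face_recognition.face_encodings(img3)[0]

# 3. Dump the whole dictionary back to the pickle file.
with open('dataset_faces.dat', 'wb') as f:
    pickle.dump(all_face_encodings, f)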

recommended way to download images in python requests

I see that there are two ways to download images using python-requests.
Using PIL, as stated in the docs (https://requests.readthedocs.io/en/master/user/quickstart/#binary-response-content):
from PIL import Image
from io import BytesIO
# r is assumed to be a requests.Response for the image URL, e.g. r = requests.get(url)
i = Image.open(BytesIO(r.content))
Using streamed response content:
r = requests.get(url, stream=True)
with open(image_name, 'wb') as f:
    for chunk in r.iter_content():
        f.write(chunk)
Which is the recommended way to download images, however? Both have their merits, I suppose, and I was wondering what the optimal approach is.
I love the minimalist way. There is no single right way; it depends on the task you want to perform and the constraints you have.
import requests

with open('file.png', 'wb') as f:
    f.write(requests.get(url).content)
    # if you change png to jpg, there will be no error
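For large images, the streamed variant from the question avoids holding the whole file in memory at once; a common pattern (the URL and chunk size here are just examples) is:

import requests

url = "https://example.com/image.png"      # example URL
with requests.get(url, stream=True) as r:
    r.raise_for_status()                   # fail early on HTTP errors
    with open("file.png", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)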
I used the lines of code below in a function to save images.
# import the required libraries from Python
import pathlib,urllib.request,os,uuid
# URL of the image you want to download
image_url = "https://example.com/image.png"
# Using the uuid generate new and unique names for your images
filename = str(uuid.uuid4())
# Strip the image extension from its original name
file_ext = pathlib.Path(image_url).suffix
# Join the new image name to the extension
picture_filename = filename + file_ext
# Using pathlib, specify where the image is to be saved
downloads_path = str(pathlib.Path.home() / "Downloads")
# Form a full image path by joining the path to the
# images' new name
picture_path = os.path.join(downloads_path, picture_filename)
# Using "urlretrieve()" from urllib.request save the image
urllib.request.urlretrieve(image_url, picture_path)
# urlretrieve() takes in 2 arguments
# 1. The URL of the image to be downloaded
# 2. The image new name after download. By default, the image is
# saved inside your current working directory
