Running a Python script from PHP gives a null result on macOS - python

So I'm trying to execute a Python 3.9 file from PHP, using VS Code and the Safari browser. I've spent hours trying to figure out what the problem is, because it keeps giving me a null result.
Here is the code in my PHP file:
<?php
$result = shell_exec("/usr/bin/python3 fr/fr_load.py");
var_dump($result);
?>
I tried other Python files containing only a print statement and they worked just fine, so the PHP-to-Python connection works, but I can't figure out what the problem is.
My Python code is:
#!/usr/bin/python3
# import the required libraries
import cv2
import joblib
from sys import exit
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

# function to detect a face in an image
def face_detection(image_to_detect):
    # converting the image to grayscale since it's required for eigen and fisher faces
    image_to_detect_gray = cv2.cvtColor(image_to_detect, cv2.COLOR_BGR2GRAY)
    # load the pretrained model for face detection
    # haarcascade is recommended for Eigenface
    face_detection_classifier = cv2.CascadeClassifier('php/fr/models/haarcascade_frontalface_default.xml')
    # detect all face locations in the image using the classifier
    all_face_locations = face_detection_classifier.detectMultiScale(image_to_detect_gray)
    # if no faces are detected
    if len(all_face_locations) == 0:
        return None, None
    # splitting the tuple to get the four face positions
    # [0] because we assume we have only one face per image in our data set
    x, y, width, height = all_face_locations[0]
    # cropping the face region
    face_coordinates = image_to_detect_gray[y:y+height, x:x+width]
    # for training and testing, all images should be the same size (for eigen)
    face_coordinates = cv2.resize(face_coordinates, (500, 500))  # we can use any size that fits
    # return the detected face and the face location
    return face_coordinates, all_face_locations[0]

names = []
names.append("steve jobs")
names.append("ali alramadan")
names.append("hillary clinton")

######### load recognition model for later use ###########
face_classifier = cv2.face.EigenFaceRecognizer_create()
face_classifier.read("php/fr/models/our model.yml")

######## prediction ##############
# path of the image we want to test and predict
image_to_classify = cv2.imread("php/fr/dataset/testing/load.jpg")
# make a copy of the image because we don't want to ruin the actual image by drawing on it
image_to_classify_copy = image_to_classify.copy()
# get the face from the image
face_coordinates_classify, box_locations = face_detection(image_to_classify_copy)
# if no face is returned
if face_coordinates_classify is None:
    print("There are no faces in the image to classify")
    exit()
# if a face is returned, then predict the face
# a large distance means a large difference between the training image and the test image
name_index, distance = face_classifier.predict(face_coordinates_classify)
name = names[name_index]
distance = abs(distance)
print(name)
# draw a bounding box and label for the detected face
(x, y, w, h) = box_locations
cv2.rectangle(image_to_classify, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.putText(image_to_classify, name, (x, y-5), cv2.FONT_HERSHEY_PLAIN, 2.5, (0, 255, 0), 2)
# show the image in a window
cv2.imshow("Prediction " + name, cv2.resize(image_to_classify, (500, 500)))
cv2.waitKey(0)
cv2.destroyAllWindows()
Any suggestions, please?
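No answer was posted here, but two things are worth checking: shell_exec() returns null when the command fails or produces no output, and it never captures stderr, so a crash inside the script is invisible from PHP; also, cv2.imshow needs a GUI session that the web-server process usually doesn't have, and relative paths like php/fr/... resolve against PHP's working directory rather than the script's. A minimal sketch of one way to surface the error from the Python side, assuming the existing recognition code is moved into a main() function; the log path is a placeholder:

#!/usr/bin/python3
# a minimal sketch, assuming the existing recognition code is moved into main();
# the log path below is a placeholder, not part of the original script
import os
import sys
import traceback

LOG_PATH = "/tmp/fr_debug.log"  # hypothetical location; pick any path PHP's user can write to

def main():
    # ... the existing recognition code goes here ...
    pass

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # record the working directory and the full traceback so the failure
        # is visible even when PHP only sees a null result
        with open(LOG_PATH, "a") as log:
            log.write("cwd: %s\n" % os.getcwd())
            log.write(traceback.format_exc())
        sys.exit(1)

On the PHP side, appending 2>&1 to the command (shell_exec("/usr/bin/python3 fr/fr_load.py 2>&1")) folds stderr into the returned string, which usually reveals the same traceback.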

Related

Mediapipe Display Body Landmarks Only

I have installed Mediapipe (0.9.0.1) using Python (3.7.0) on Windows 11.
I have been able to successfully get Mediapipe to generate landmarks (for face and body) for an image, a video, and a webcam stream.
I would now like to get Mediapipe to draw only body-specific landmarks (i.e. exclude facial landmarks).
I understand that I could use OpenCV (or Czone) to accomplish this goal; however, I am looking to achieve my objective using Mediapipe itself (i.e. using the draw_landmarks function in the MediaPipe library).
The specific bit of code I am trying (but with errors) is the following:
#Initialize a list to store the detected landmarks.
landmarks = []
# Iterate over the Mediapipe detected landmarks.
for landmark in results.pose_landmarks.landmark:
    # Append the Mediapipe landmark into the list.
    landmarks.append((int(landmark.x * width), int(landmark.y * height),
                      (landmark.z * width)))

#create index list for specific landmarks
body_landmark_indices = [11,12,13,14,15,16,23,24,25,26,27,28,29,30,31,32]
landmark_list_body = []

#Create a list which only has the required landmarks
for index in body_landmark_indices:
    landmark_list_body.append(landmarks[index - 1])

mp_drawing.draw_landmarks(
    image=output_image,
    landmark_list=landmark_list_body.pose_landmarks,
    connections=mp_pose.POSE_CONNECTIONS,
    landmark_drawing_spec=landmark_drawing_spec,
    connection_drawing_spec=connection_drawing_spec)
Executing the above, I get the error 'list' object has no attribute 'pose_landmarks'.
I have replaced landmark_list=landmark_list_body.pose_landmarks with landmark_list=landmark_list_body, but still get errors.
I am now very tired and out of ideas. Is there a capeless hero out there?
Thanks.
You can try the following approach:
import cv2
import mediapipe as mp
import numpy as np
from mediapipe.python.solutions.pose import PoseLandmark
from mediapipe.python.solutions.drawing_utils import DrawingSpec

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_pose = mp.solutions.pose

custom_style = mp_drawing_styles.get_default_pose_landmarks_style()
custom_connections = list(mp_pose.POSE_CONNECTIONS)

# list of landmarks to exclude from the drawing
excluded_landmarks = [
    PoseLandmark.LEFT_EYE,
    PoseLandmark.RIGHT_EYE,
    PoseLandmark.LEFT_EYE_INNER,
    PoseLandmark.RIGHT_EYE_INNER,
    PoseLandmark.LEFT_EAR,
    PoseLandmark.RIGHT_EAR,
    PoseLandmark.LEFT_EYE_OUTER,
    PoseLandmark.RIGHT_EYE_OUTER,
    PoseLandmark.NOSE,
    PoseLandmark.MOUTH_LEFT,
    PoseLandmark.MOUTH_RIGHT
]

for landmark in excluded_landmarks:
    # we change the way the excluded landmarks are drawn
    custom_style[landmark] = DrawingSpec(color=(255,255,0), thickness=None)
    # we remove all connections which contain these landmarks
    custom_connections = [connection_tuple for connection_tuple in custom_connections
                          if landmark.value not in connection_tuple]

IMAGE_FILES = ["test.jpg"]
BG_COLOR = (192, 192, 192)
with mp_pose.Pose(
        static_image_mode=True,
        model_complexity=2,
        enable_segmentation=True,
        min_detection_confidence=0.5) as pose:
    for idx, file in enumerate(IMAGE_FILES):
        image = cv2.imread(file)
        image_height, image_width, _ = image.shape
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

        annotated_image = image.copy()
        mp_drawing.draw_landmarks(
            annotated_image,
            results.pose_landmarks,
            connections=custom_connections,  # passing the modified connections list
            landmark_drawing_spec=custom_style)  # and drawing style

        cv2.imshow('landmarks', annotated_image)
        cv2.waitKey(0)
It modifies the DrawingSpec and POSE_CONNECTIONS to "hide" a subset of landmarks.
However, due to the way the draw_landmarks() function is implemented in Mediapipe, it is also required to add a condition in drawing_utils.py (located in site-packages/mediapipe/python/solutions):
if drawing_spec.thickness == None: continue
Add it just before line 190 (# White circle border). The result should look like this:
...
drawing_spec = landmark_drawing_spec[idx] if isinstance(
    landmark_drawing_spec, Mapping) else landmark_drawing_spec
if drawing_spec.thickness == None: continue
# White circle border
circle_border_radius = max(drawing_spec.circle_radius + 1,
                           int(drawing_spec.circle_radius * 1.2))
...
This change is required in order to completely eliminate the white border that is drawn around landmarks regardless of their drawing specification.
Hope it helps.

Python Code containing OpenCV is not detecting object in my image

I am very new to OpenCV and Python, so for my first object-recognition project I am using a small test file in Python. This is the test file:
import cv2
from matplotlib import pyplot as plt

# Opening image
img = cv2.imread("image.jpg")

# OpenCV opens images as BGR
# but we want it as RGB. We'll
# also need a grayscale version
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Use minSize so we are not
# bothered by extra-small
# dots that would look like STOP signs
stop_data = cv2.CascadeClassifier('cascade.xml')
found = stop_data.detectMultiScale(img_gray,
                                   minSize=(20, 20))
amount_found = len(found)
if amount_found != 0:
    # There may be more than one
    # sign in the image
    for (x, y, width, height) in found:
        # We draw a green rectangle around
        # every recognized sign
        cv2.rectangle(img_rgb, (x, y),
                      (x + width, y + height),
                      (0, 255, 0), 5)

# Creates the environment of
# the picture and shows it
plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()
The tutorial I followed told me to use a premade cascade.xml file. However, I wanted to train my own cascade file to recognize a simple apple logo, which I cropped from a photo. I copy-pasted the same image multiple times into one folder titled p. For the negative images, I used a folder titled n. After this, I used the GUI cascade trainer and trained a cascade file. However, when I use the same image (uncropped) in the Python program, there is no output.
This is the image which I used to train:
This is the original image which I put in the Python script:
The original image is 422x612 and the cropped image is 285x354. I had set the size in the trainer to height=20 and width=15.
These are my settings:
Folder p:
Output of program:
End of the Cascade GUI log:
NEG count : acceptanceRatio 0 : 0
Required leaf false alarm rate achieved. Branch training terminated.
Could someone tell me where I am going wrong? Please ask if I need to provide any more information. Is there some ratio relation between the positive images and the actual image that I am missing?
UPDATE: I added some more negative images and am now getting this output:
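No answer was posted for this one. A common first check with a freshly trained cascade is whether detectMultiScale is filtering everything out before the detection stage gets a chance. A minimal sketch, assuming the trained cascade.xml and the uncropped test image from the question; the scaleFactor and minNeighbors values are illustrative starting points, not values from the original post:

import cv2

img = cv2.imread("image.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

stop_data = cv2.CascadeClassifier("cascade.xml")
if stop_data.empty():
    # an empty classifier means the XML file failed to load at all
    raise IOError("cascade.xml could not be loaded")

found = stop_data.detectMultiScale(
    img_gray,
    scaleFactor=1.05,  # smaller scale steps catch more object sizes
    minNeighbors=3,    # lower this if the detector is too strict
    minSize=(15, 20))  # roughly the 15x20 training window, rather than (20, 20)
print("Detections:", len(found))

If the classifier loads but still returns nothing, the usual suspect is too few (or too similar) positive samples: duplicating one cropped logo many times gives the trainer almost no variation to learn from.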

CVLIB - how to add blurred subface to the original image?

Friends, I need to implement some code that blurs faces in given images (I am not a dev, so this is really difficult for me). I found out that I can use OpenCV and cvlib to do that, and found sample code (from the cvlib repository) which does part of the job.
I understood that I need to get the subfaces and apply the blur to them, and I could do that, but now I don't know how to put the blurred face back into the original image. Could someone help me with that?
import cvlib as cv
import sys
from cv2 import cv2
import os

# read input image
image = cv2.imread('path')

# apply face detection
faces, confidences = cv.detect_face(image)
print(faces)
print(confidences)

# loop through detected faces
for face, conf in zip(faces, confidences):
    (startX, startY) = face[0], face[1]
    (endX, endY) = face[2], face[3]
    subFace = image[startY:endY, startX:endX]
    subFace = cv2.GaussianBlur(subFace, (23, 23), 30)

# display output
# press any key to close window
cv2.imshow("face_detection", image)
cv2.waitKey()
cv2.imshow("face_detection", subFace)

# release resources
cv2.destroyAllWindows()
I finally figured out how to do it:
import cvlib as cv
import sys
from cv2 import cv2
import os

# read input image
image = cv2.imread('path')

# apply face detection
faces, confidences = cv.detect_face(image)

# print the array with the coordinates and the confidence
print(faces)
print(confidences)

# loop through detected faces
for face, conf in zip(faces, confidences):
    (startX, startY) = face[0], face[1]
    (endX, endY) = face[2], face[3]
    # get the subface
    subFace = image[startY:endY, startX:endX]
    # apply gaussian blur over the subface
    subFace = cv2.GaussianBlur(subFace, (23, 23), 30)
    # put the blurred subface back into the original image
    image[startY:startY+subFace.shape[0], startX:startX+subFace.shape[1]] = subFace

cv2.imshow("face_detection", image)
cv2.waitKey()

# save output
cv2.imwrite("face_detection.jpg", image)

# release resources
cv2.destroyAllWindows()
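Since cv2.GaussianBlur returns a new array of the same shape as its input, the write-back can also be collapsed into a single step by assigning the blurred result straight back into the slice. A minimal sketch of that variant, under the same assumptions as the code above ('path' replaced by a real image file):

import cvlib as cv
import cv2

image = cv2.imread('path')
faces, _ = cv.detect_face(image)

for (startX, startY, endX, endY) in faces:
    # blur the face region and assign it directly back into the image;
    # the slice on the left selects exactly the region that was blurred
    image[startY:endY, startX:endX] = cv2.GaussianBlur(
        image[startY:endY, startX:endX], (23, 23), 30)

cv2.imwrite("face_detection.jpg", image)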

Resizing non-uniform images with precise face location

I work at a studio that does school photos and we are trying to make a script to eliminate the job of cropping each photo to a template. The photos we work with are fairly uniform, but they vary a bit in resolution and head position. I took up the mantle of trying to write the script with my fairly limited Python knowledge, and through a lot of trial and error and online resources I think I have got most of the way there.
At the moment I am trying to figure out the best way to crop the image from the NumPy array with the head where I want it, and I just can't find a good, flexible solution. The head needs to be positioned slightly differently for pose 1 and pose 2, so it needs to be easy to change on the fly (I'll probably implement some sort of simple GUI to input things like that, but for now I can just change the code).
I also need to be able to change the output resolution of the photos so they are all uniform (2000x2500). Anyone have any ideas?
At the moment this is my current code; it just saves the detected face square:
import cv2
import os.path
import glob

# Cascade path
cascPath = 'haarcascade_frontalface_default.xml'

# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)

# Check for output folder and create it if it's not there
if not os.path.exists('output'):
    os.makedirs('output')

# Read Images
images = glob.glob('*.jpg')
for c, i in enumerate(images):
    image = cv2.imread(i, 1)
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Find face(s) using cascade
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,    # size of groups
        minNeighbors=5,     # how many neighbouring detections are needed for a face to be valid
        minSize=(500, 500)  # min size in pixels for a face
    )
    # Output number of faces found in image
    print('Found {0} faces!'.format(len(faces)))
    # Crop the detected face square
    for (x, y, w, h) in faces:
        imgCrop = image[y:y+h, x:x+w]
    if len(faces) > 0:
        # Save images to output folder with original name
        cv2.imwrite('output/' + i, imgCrop)
I can crop using it like this:
# Crop Padding
left = 300
right = 300
top = 400
bottom = 1000

for (x, y, w, h) in faces:
    imgCrop = image[y-top:y+h+bottom, x-left:x+w+right]
but that outputs pretty random resolutions, which change based on the image resolution.
TL;DR
To set a new resolution, you can use cv2.resize. There may be pixel loss, so use an interpolation method.
The resized image may be in BGR format, so you may need to convert it to RGB format.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)  # Make sure the cropped image is in RGB format
cv2.imwrite("image-1.png", crop)
Suggestion:
One approach is to use Python's face-recognition library.
The approach uses two sample images for training, then predicts the faces in the next image based on them.
For instance, the following are the training images:
We want to predict the faces in the image below:
When we get the facial encodings of the training images and apply them to the next image:
import face_recognition
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw

# Load a sample picture and learn how to recognize it.
first_image = face_recognition.load_image_file("images/ex.jpg")
first_face_encoding = face_recognition.face_encodings(first_image)[0]

# Load a second sample picture and learn how to recognize it.
second_image = face_recognition.load_image_file("images/index.jpg")
sec_face_encoding = face_recognition.face_encodings(second_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    first_face_encoding,
    sec_face_encoding
]
print('Learned encoding for', len(known_face_encodings), 'images.')

# Load an image with an unknown face
unknown_image = face_recognition.load_image_file("images/babes.jpg")

# Find all the faces and face encodings in the unknown image
face_locations = face_recognition.face_locations(unknown_image)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)

# Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library
# See http://pillow.readthedocs.io/ for more about PIL/Pillow
pil_image = Image.fromarray(unknown_image)
# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)

# Loop through each face found in the unknown image
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_match_index = np.argmin(face_distances)
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255), width=5)

# Remove the drawing library from memory as per the Pillow docs
del draw

# Display the resulting image
plt.imshow(pil_image)
plt.show()
The output will be:
The above is my suggestion. When you create a new resolution from the current image, there will be pixel loss, so you need to use an interpolation method.
For instance, after finding the face locations, select the coordinates in the original image:
# Add after the draw.rectangle call.
crop = unknown_image[top:bottom, left:right]
Set the new resolution to 2000 x 2500 and interpolate with cv2.INTER_LANCZOS4.
Possible question: why cv2.INTER_LANCZOS4?
Of course, you can select whatever you like, but in this post cv2.INTER_LANCZOS4 was suggested.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
Save the image:
crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)  # Make sure the cropped image is in RGB format
cv2.imwrite("image-1.png", crop)
The outputs are around 4.3 MB, so I can't display them here.
From the final result, we can clearly see and identify the faces. The library precisely finds the faces in the image.
Here is what you can do:
Either use training images from your own set, or use the example above.
Apply the face-recognition functions to each image, using the trained face locations, and save the results to the output directory.
Here is how I got it to crop how I wanted; this is added right below the "output number of faces" line:
#Get the face position and output values into variables, might not be needed but I did it
for (x, y, w, h) in faces:
    xdis = x
    ydis = y
    w = w
    h = h

#Get scale value by dividing the wanted head height by the detected head height
ws = 600 / w
hs = 600 / h

#Scale the image to get the head to the right size; uses bilinear interpolation by default
scale = cv2.resize(image, (0, 0), fx=hs, fy=ws)

#Calculate the head position for the given values
sxdis = int(xdis * ws)  # applying scale to x distance and turning it into an integer
sydis = int(ydis * hs)  # applying scale to y distance and turning it into an integer
sycent = sydis + 300    # adding half the head height to get the center
ystart = sycent - 700   # subtract where you want the head center to be in pixels, this is for the vertical
yend = ystart + 2500    # add whatever you want the vertical resolution to be
xcent = sxdis + 300     # adding half the head width to get the center
xstart = xcent - 1000   # subtract where you want the head center to be in pixels, this is for the horizontal
xend = xstart + 2000    # add whatever you want the horizontal resolution to be

#Crop the image
cropped = scale[ystart:yend, xstart:xend]
It's a mess, but it works exactly how I wanted it to work.
I ended up going with OpenCV instead of switching to python face-recognition because of speed, but I might switch over if I can get multithreading to work in face-recognition.
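One fragility in the snippet above: ystart or xstart can go negative, and yend or xend can run past the image, in which case the NumPy slice silently returns an array smaller than 2000x2500. A minimal sketch of a border-safe variant, assuming the same scale image and the computed window; the white padding color is an illustrative choice:

import cv2

def crop_to_template(scale, ystart, xstart, out_w=2000, out_h=2500):
    # Crop a fixed-size window, padding where the window leaves the image,
    # so the output is always exactly out_w x out_h.
    h, w = scale.shape[:2]
    # how far the window sticks out past each image edge
    pad_top = max(0, -ystart)
    pad_left = max(0, -xstart)
    pad_bottom = max(0, (ystart + out_h) - h)
    pad_right = max(0, (xstart + out_w) - w)
    # pad the image so the window is guaranteed to fit
    padded = cv2.copyMakeBorder(scale, pad_top, pad_bottom, pad_left, pad_right,
                                cv2.BORDER_CONSTANT, value=(255, 255, 255))
    # shift the window into the padded image's coordinates
    y0 = ystart + pad_top
    x0 = xstart + pad_left
    return padded[y0:y0 + out_h, x0:x0 + out_w]

cropped = crop_to_template(scale, ystart, xstart) then drops in where the plain slice was, and the output is guaranteed to be exactly 2000x2500.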

X.shape[1] size doesn't fit the expected value

I'm currently working on my final degree project in robotics, and I decided to create an open-source robot capable of replicating human emotions. The robot is all set up and ready to receive orders, but I'm still busy coding it. I'm currently basing my code off this method. The idea is to extract 68 facial landmarks from a low-FPS video feed (using an RPi Camera V2), feed those landmarks to a trained SVM classifier, and have it return a number from 0-6 depending on the expression it detects (Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral). I'm testing the capabilities of my model with some pictures I took using the RPi Camera, and this is what I've managed to put together so far in terms of code:
# import the necessary packages
from imutils import face_utils
import dlib
import cv2
import numpy as np
import time
import argparse
import os
import sys
if sys.version_info >= (3, 0):
    import _pickle as cPickle
else:
    import cPickle
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from data_loader import load_data
from parameters import DATASET, TRAINING, HYPERPARAMS

def get_landmarks(image, rects):
    if len(rects) > 1:
        raise BaseException("TooManyFaces")
    if len(rects) == 0:
        raise BaseException("NoFaces")
    return np.matrix([[p.x, p.y] for p in predictor(image, rects[0]).parts()])

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("Initializing variables...")
p = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(p)

# path to pretrained model
path = "saved_model.bin"

# load pretrained model
print("Loading model...")
model = cPickle.load(open(path, 'rb'))

# initialize final image height & width
height = 48
width = 48

# initialize landmarks variable as empty array
landmarks = []

# load the input image and convert it to grayscale
print("Loading image...")
gray = cv2.imread("foo.jpg")

# detect faces in the grayscale image
print("Detecting faces in loaded image...")
rects = detector(gray, 0)

# loop over the face detections
print("Looping over detections...")
for (i, rect) in enumerate(rects):
    # determine the facial landmarks for the face region, then
    # convert the facial landmark (x, y)-coordinates to a NumPy
    # array
    shape = predictor(gray, rect)
    shape = face_utils.shape_to_np(shape)
    # loop over the (x, y)-coordinates for the facial landmarks
    # and draw them on the image
    for (x, y) in shape:
        cv2.circle(gray, (x, y), 2, (0, 255, 0), -1)

# show the output image with the face detections + facial landmarks
print("Storing saved image...")
cv2.imwrite("output.jpg", gray)
print("Image stored as 'output.jpg'")

# arrange landmarks in array
print("Collecting and arranging landmarks...")
# scipy.misc.imsave('temp.jpg', image)
# image2 = cv2.imread('temp.jpg')
face_rects = [dlib.rectangle(left=1, top=1, right=47, bottom=47)]
landmarks = get_landmarks(gray, face_rects)

# load data
print("Loading collected data into predictor...")
print("Extracted landmarks: ", landmarks)
landmarks = np.array(landmarks.flatten())

# predict expression
print("Making prediction")
predicted = model.predict(landmarks)
However, after running the code everything seems to be fine up until this point:
Making prediction
Traceback (most recent call last):
  File "face.py", line 97, in <module>
    predicted = model.predict(landmarks)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 576, in predict
    y = super(BaseSVC, self).predict(X)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 325, in predict
    X = self._validate_for_predict(X)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 478, in _validate_for_predict
    (n_features, self.shape_fit_[1]))
ValueError: X.shape[1] = 136 should be equal to 2728, the number of features at training time
I searched for similar issues on this website, but this being such a specific use case, I didn't quite find what I needed. I've been working on the design and research for quite some time, but finding all the snippets needed to make the code work has taken most of my time, and I'd love to polish this concept as soon as possible since the presentation date is approaching quickly. Any and all contributions are greatly welcomed!
Here's the trained model I'm currently using, by the way.
I am probably being silly, but it looks like you define path after you use it to load your model.
Also, path seems like a very bad name for a variable containing a file location; perhaps modelFileLocation is less likely to already be defined.
Solved it! It turns out my model was trained using a combination of HOG features and dlib landmarks, but I was only feeding the landmarks to the predictor, which resulted in the size discrepancy.
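For readers who hit the same error: the general shape of the fix is to rebuild the exact feature vector used at training time before calling predict. A minimal sketch, assuming the model expects HOG features of the 48x48 face concatenated with the 136 flattened landmark coordinates; the skimage HOG parameters below are illustrative assumptions and must match whatever the training pipeline actually used to reach 2728 features:

import numpy as np
import cv2
from skimage.feature import hog

# gray_face is assumed to be the cropped grayscale face from the question's code
face = cv2.resize(gray_face, (48, 48))
# the HOG parameters here are placeholders; they must mirror the training setup
hog_features = hog(face,
                   orientations=8,
                   pixels_per_cell=(16, 16),
                   cells_per_block=(1, 1))
# 68 landmarks -> 136 values, exactly the X.shape[1] from the traceback
landmark_features = np.asarray(landmarks).flatten()
# concatenate both feature groups into the single sample row predict() expects
features = np.concatenate([hog_features, landmark_features]).reshape(1, -1)
predicted = model.predict(features)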
