pythreejs, passing array of vectors into a shader - python

I need to pass an array of vectors as a uniform to a shader. As far as I understand, the shader code should look like this:
fragment_shader = """
uniform int myVectorSize;
uniform vec4 myVector[50];
void main() {
    gl_FragColor = ... // using myVector
}
"""
But I could not find a way to pass this array to the shader:
material = ShaderMaterial(
    uniforms=dict(
        myVectorSize=dict(value=10),
        myVector=???,
        **UniformsLib['common']
    ),
    fragmentShader=fragment_shader,
)
Is there a way to do this?
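For what it's worth, here is one untested sketch of how the uniform might be filled. I am assuming that pythreejs forwards the uniform dict to three.js unchanged, and that three.js accepts a flat list of floats (four per element) for a vec4 array uniform; the data here is hypothetical:
import numpy as np
from pythreejs import ShaderMaterial, UniformsLib

vectors = np.random.rand(10, 4)   # hypothetical data: 10 vec4 values

material = ShaderMaterial(
    uniforms=dict(
        myVectorSize=dict(value=len(vectors)),
        myVector=dict(value=vectors.ravel().tolist()),  # flat list, 4 floats per vec4
        **UniformsLib['common']
    ),
    fragmentShader=fragment_shader,
)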

Related

Multiple mat4 by vec4 and then make it vec3 in Python

Please help me: how can I multiply a mat4 by a vec4 and then turn it into a vec3 in Python? Here is an example of what I need in C++:
glm::vec3 var = glm::vec3((mat4(...)* glm::vec4(0, 0, 0, 1)));
And here is my failed attempt in Python:
var=pyrr.Vector3( pyrr.matrix44.create_from_translation(pyrr.Vector3([20,0,5])) * pyrr.Vector4([0,0,0,1]))
Use the @ operator.
var = (pyrr.matrix44.create_from_translation(pyrr.Vector3([20, 0, 5])).T @ pyrr.Vector4([0, 0, 0, 1])).xyz
The @ operator is described in PEP 465.
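To see why the transpose is needed: pyrr stores matrices row-major, with the translation in the last row, while glm's column-vector convention expects it in the last column. A quick check (plain NumPy, printed values are what I expect, not taken from the answer):
import numpy as np
import pyrr

m = pyrr.matrix44.create_from_translation(pyrr.Vector3([20, 0, 5]))
print(m[3, :3])                               # translation sits in the last ROW: [20. 0. 5.]
print(m.T @ np.array([0.0, 0.0, 0.0, 1.0]))   # transposed, glm-style: [20. 0. 5. 1.]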

Detect motion blur of a cropped face with python i.e. opencv

I'm detecting faces with a Haar cascade and tracking them with a webcam using OpenCV. I need to save each face that is tracked. The problem arises when people are moving, in which case the face becomes blurry.
I've tried to mitigate this with OpenCV's DNN face detector and the Laplacian variance, using the following code:
blob = cv2.dnn.blobFromImage(cropped_face, 1.0, (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()
confidence = detections[0, 0, 0, 2]

blur = cv2.Laplacian(cropped_face, cv2.CV_64F).var()
if confidence >= confidence_threshold and blur >= blur_threshold:
    cv2.imwrite('less_blurry_image.jpg', cropped_face)
Here I tried to save a face only if it is not blurred by motion, by setting blur_threshold to 500 and confidence_threshold to 0.98 (i.e. 98%).
But the problem is that if I change the camera, I have to adjust the thresholds manually again, and in most cases a fixed threshold discards most of the faces.
Plus, detection is difficult because the background is always sharp compared to the blurred face.
So my question is: how can I detect this motion blur on a face? I know I could train an ML model for motion-blur detection, but that would require heavy processing resources for a small task.
Moreover, going that route would require a huge amount of annotated training data, which is not easy to obtain for a student like me.
Hence, I am trying to detect this with OpenCV, which is far less resource-intensive than an ML model.
Is there a less resource-intensive solution for this?
You can probably use a Fourier Transform (FFT) or a Discrete Cosine Transform (DCT) to figure out how blurred your faces are. Blur in images leads to high frequencies disappearing, and only low frequencies remaining.
So you'd take an image of your face, zero-pad it to a size that works well for the FFT or DCT, and look at how much spectral power you have at the higher frequencies.
You probably don't need the FFT; the DCT will be enough. The advantage of the DCT is that it produces a real-valued result (no imaginary part). Performance-wise, FFT and DCT are really fast for sizes that are powers of 2, as well as for sizes whose only prime factors are 2, 3 and 5 (although it will be a bit slower if 3s and 5s are involved).
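As an illustration of the FFT variant, here is a minimal NumPy sketch; the function name, the cutoff and the use of a power ratio are my own choices, and you still have to calibrate the final threshold yourself:
import numpy as np

def high_freq_ratio(gray_face, cutoff=0.25):
    # Fraction of spectral energy outside a central low-frequency window whose
    # side length is `cutoff` times the image size; lower values mean more blur.
    f = np.fft.fftshift(np.fft.fft2(gray_face.astype(np.float32)))
    power = np.abs(f) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(h * cutoff / 2)), max(1, int(w * cutoff / 2))
    low = power[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return float((power.sum() - low) / power.sum())

# usage with the question's crop (converted to grayscale first):
# score = high_freq_ratio(cv2.cvtColor(cropped_face, cv2.COLOR_BGR2GRAY))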
As mentioned by @PlinyTheElder, DCT information can give you a measure of motion blur. I am attaching the code snippet from the repo below:
The code is in C, and I am not sure whether there are Python bindings for libjpeg; otherwise you would need to create them.
/* Fast blur detection using JPEG DCT coefficients
*
* Based on "Blur Determination in the Compressed Domain Using DCT
* Information" by Xavier Marichal, Wei-Ying Ma, and Hong-Jiang Zhang.
*
* Tweak MIN_DCT_VALUE and MAX_HISTOGRAM_VALUE to adjust
* effectiveness. I reduced these values from those given in the
* paper because I find the original to be less effective on large
* JPEGs.
*
* Copyright 2010 Julian Squires <julian@cipht.net>
*/
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <jpeglib.h>
static int min_dct_value = 1; /* -d= */
static float max_histogram_value = 0.005; /* -h= */
static float weights[] = { /* diagonal weighting */
8,7,6,5,4,3,2,1,
1,8,7,6,5,4,3,2,
2,1,8,7,6,5,4,3,
3,2,1,8,7,6,5,4,
4,3,2,1,8,7,6,5,
5,4,3,2,1,8,7,6,
6,5,4,3,2,1,8,7,
7,6,5,4,3,2,1,8
};
static float total_weight = 344;
static inline void update_histogram(JCOEF *block, int *histogram)
{
    for(int k = 0; k < DCTSIZE2; k++, block++)
        if(abs(*block) > min_dct_value) histogram[k]++;
}

static float compute_blur(int *histogram)
{
    float blur = 0.0;
    for(int k = 0; k < DCTSIZE2; k++)
        if(histogram[k] < max_histogram_value*histogram[0])
            blur += weights[k];
    blur /= total_weight;
    return blur;
}

static int operate_on_image(char *path)
{
    struct jpeg_error_mgr jerr;
    struct jpeg_decompress_struct cinfo;
    jvirt_barray_ptr *coeffp;
    JBLOCKARRAY cs;
    FILE *in;
    int histogram[DCTSIZE2] = {0};

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    if((in = fopen(path, "rb")) == NULL) {
        fprintf(stderr, "%s: Couldn't open.\n", path);
        jpeg_destroy_decompress(&cinfo);
        return 0;
    }
    jpeg_stdio_src(&cinfo, in);
    jpeg_read_header(&cinfo, TRUE);
    // XXX might be a little faster if we ask for grayscale
    coeffp = jpeg_read_coefficients(&cinfo);

    /* Note: only looking at the luma; assuming it's the first component. */
    for(int i = 0; i < cinfo.comp_info[0].height_in_blocks; i++) {
        cs = cinfo.mem->access_virt_barray((j_common_ptr)&cinfo, coeffp[0], i, 1, FALSE);
        for(int j = 0; j < cinfo.comp_info[0].width_in_blocks; j++)
            update_histogram(cs[0][j], histogram);
    }
    printf("%f\n", compute_blur(histogram));

    // output metadata XXX should be in IPTC etc
    // XXX also need to destroy coeffp?
    jpeg_destroy_decompress(&cinfo);
    return 0;
}

int main(int argc, char **argv)
{
    int status, i;

    for(status = 0, i = 1; i < argc; i++) {
        if(argv[i][0] == '-') {
            if(argv[i][1] == 'd')
                sscanf(argv[i], "-d=%d", &min_dct_value);
            else if(argv[i][1] == 'h')
                sscanf(argv[i], "-h=%f", &max_histogram_value);
            continue;
        }
        status |= operate_on_image(argv[i]);
    }
    return status;
}
Compile the code:
gcc -std=c99 blur_detection.c -l jpeg -o blur-detection
Run the code:
./blur-detection <image path>

Qt3D: Scale entity size according to a distance between entity and camera

It is easy to resize an entity in code:
self.transform = Qt3DCore.QTransform()
self.transform.setScale(1.5)
But I want to resize the entity dynamically: it should enlarge when I move the camera away from it and shrink when I move the camera closer. Is it possible to do this using proper shaders?
I found this link, which contains the code I added to my vertex shader:
in vec3 vertexPosition;

uniform mat4 modelViewProjection;

void main()
{
    float reciprScaleOnscreen = 0.005;
    float w = (modelViewProjection * vec4(0.0, 0.0, 0.0, 1.0)).w;
    w *= reciprScaleOnscreen;
    gl_Position = modelViewProjection * vec4(vertexPosition.xyz * w, 1.0);
}
So there is no need to scale entities from the application code; it is simpler to do it in the shader.
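For reference, a rough, untested sketch of how such a shader could be wired into a Qt3D material from Python. Class and enum names follow the standard PyQt5 Qt3D bindings; the shader file names are placeholders of my own (the .vert file would hold the vertex shader shown above, the .frag file any simple colour shader):
from PyQt5 import Qt3DRender
from PyQt5.QtCore import QUrl

# Load the shader sources (hypothetical file names)
shader = Qt3DRender.QShaderProgram()
shader.setVertexShaderCode(
    Qt3DRender.QShaderProgram.loadSource(QUrl.fromLocalFile("scale_onscreen.vert")))
shader.setFragmentShaderCode(
    Qt3DRender.QShaderProgram.loadSource(QUrl.fromLocalFile("flat_colour.frag")))

render_pass = Qt3DRender.QRenderPass()
render_pass.setShaderProgram(shader)

# Match the default forward renderer so this technique gets selected
key = Qt3DRender.QFilterKey()
key.setName("renderingStyle")
key.setValue("forward")

technique = Qt3DRender.QTechnique()
gl = technique.graphicsApiFilter()
gl.setApi(Qt3DRender.QGraphicsApiFilter.OpenGL)
gl.setProfile(Qt3DRender.QGraphicsApiFilter.CoreProfile)
gl.setMajorVersion(3)
gl.setMinorVersion(1)
technique.addFilterKey(key)
technique.addRenderPass(render_pass)

effect = Qt3DRender.QEffect()
effect.addTechnique(technique)

material = Qt3DRender.QMaterial()
material.setEffect(effect)
# add `material` to the entity's components instead of a stock material;
# Qt3D resolves the modelViewProjection uniform by name automatically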

Converting from UTM to LongLat using Proj4 in C++

I've been going around in circles on this issue for days, but haven't been able to find an explanation for what I am doing wrong. I hope you can lend me a hand.
I have a set of UTM coordinates (epsg:23030) that I want to convert to LongLat Coordinates (epsg:4326) by using the proj4 library for C++ (libproj-dev). My code is as follows:
#include "proj_api.h
#include <geos/geom/Coordinate.h>
geos::geom::Coordinate utm2longlat(double x, double y){
// Initialize LONGLAT projection with epsg:4326
if ( !( pj_longlat = pj_init_plus("+init=epsg:4326" ) ) ){
qDebug() << "pj_init_plus error: longlat";
}
// Initialize UTM projection with epsg:23030
if ( ! (pj_utm = pj_init_plus("+init=epsg:23030" ) ) ){
qDebug() << "pj_init_plus error: utm";
}
// Transform UTM projection into LONGLAT projection
int p = pj_transform( pj_utm, pj_longlat, 1, 1, &x, &y, NULL );
// Check for errors
qDebug() << "Error message" << pj_strerrno( p ) ;
// Return values as coordinate
return geos::geom::Coordinate(x, y)
}
My call to the function utm2longlat:
...
// UTM coordinates
double x = 585363.1;
double y = 4796767.1;
geos::geom::Coordinate coord = utm2longlat( x, y );
qDebug() << coord.x << coord.y;
/* Result is -0.0340087 0.756025 <-- WRONG */
In my example:
I know that UTM coordinates (585363.1 4796767.1) refer to LongLat coordinates (-1.94725 43.3189).
However, when called, the function returns a set of wrong coordinates: (-0.0340087 0.756025 ).
I was wondering whether I had misconfigured the projections when initializing them, so I decided to test the Proj4 Python bindings (pyproj), just to see whether I got the same wrong coordinates... and curiously, I got the correct ones.
from pyproj import Proj, transform
# Initialize UTM projection
proj_utm = Proj(init='epsg:23030')
# Initialize LongLat projection
proj_lonlat = Proj(init='epsg:4326')
x_utm, y_utm = 585363.1, 4796767.1
x_longlat, y_longlat = transform(proj_utm, proj_lonlat, x_utm, y_utm)
# Print results
print "original", x_utm, y_utm
print "utm2lonlat", x_longlat, y_longlat
# Result is -1.94725 43.3189 <-- CORRECT
From what I understand, pyproj is a set of Cython bindings over the Proj4 library, so I am using the same core library from both programming languages.
Do you have any clue as to what could be wrong? Am I missing some kind of conversion in the C++ function?
Thanks in advance.
The result seems to be correct to me, but it's returned in radians instead of degrees. Convert the result to degrees and check again.
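A quick check in Python confirms this by converting the values the C++ function returned from radians to degrees; in the C++ code, multiplying the outputs of pj_transform by RAD_TO_DEG (defined in proj_api.h) gives degrees directly:
import math

lon_rad, lat_rad = -0.0340087, 0.756025   # values returned by utm2longlat
print(math.degrees(lon_rad), math.degrees(lat_rad))
# ~ -1.95 43.32  -> the expected longitude/latitude, now in degrees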

CGAL-Bindings (Python): Mesh from point cloud

I'm trying to do a finite element analysis of a patient's aorta, and I have a cloud of 3D points:
XYZ = array([[ 23.4929, 126.6431, 78.2083],
[ 23.505 , 123.2618, 76.1998],
[ 23.52 , 124.356 , 80.4145],
...,
[116.4752, 136.5604, 79.988 ],
[ 107.8206, 136.9329, 73.7108],
[ 154.0807, 91.6834, 91.9668]])
Visualization: http://i.stack.imgur.com/U25dM.png
For that I need to get the following arrays:
elem = {tetrahedra made of 4 indices}
faces = {triangles made of 3 indices}
But my question is:
How can I get, using CGAL-bindings (for Python), the 3D triangulation of the point cloud, and then extract the tetrahedra and triangles for the mesh?
I already tried, but the resulting 3D triangulation has indices beyond len(XYZ), which makes no sense (maximum index in the result = 12000 vs len(XYZ) = 1949).
If someone who has already used CGAL-bindings for Python can help me use it for this purpose, or help me understand the C++ code from the CGAL mesh generation examples, I would really appreciate it.
http://doc.cgal.org/latest/Mesh_3/index.html#Chapter_3D_Mesh_Generation
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Mesh_triangulation_3.h>
#include <CGAL/Mesh_complex_3_in_triangulation_3.h>
#include <CGAL/Mesh_criteria_3.h>
#include <CGAL/Labeled_image_mesh_domain_3.h>
#include <CGAL/make_mesh_3.h>
#include <CGAL/Image_3.h>
// Domain
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Labeled_image_mesh_domain_3<CGAL::Image_3,K> Mesh_domain;
// Triangulation
typedef CGAL::Mesh_triangulation_3<Mesh_domain>::type Tr;
typedef CGAL::Mesh_complex_3_in_triangulation_3<Tr> C3t3;
// Criteria
typedef CGAL::Mesh_criteria_3<Tr> Mesh_criteria;
// To avoid verbose function and named parameters call
using namespace CGAL::parameters;
int main()
{
    // Loads image
    CGAL::Image_3 image;
    image.read("data/liver.inr.gz");
    // Domain
    Mesh_domain domain(image);
    // Mesh criteria
    Mesh_criteria criteria(facet_angle=30, facet_size=6, facet_distance=4,
                           cell_radius_edge_ratio=3, cell_size=8);
    // Meshing
    C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria);
    // Output
    std::ofstream medit_file("out.mesh");
    c3t3.output_to_medit(medit_file);
    return 0;
}
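If the goal is simply to obtain tetrahedra and boundary triangles from the raw point cloud, a quick alternative to CGAL is scipy's Delaunay triangulation; note this is a different technique (no mesh-quality criteria, and the boundary it produces is the convex hull rather than the true aorta surface), shown here only as a sketch using the XYZ array from the question:
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

tri = Delaunay(XYZ)                 # XYZ is the (N, 3) point array from the question
elem = tri.simplices                # (M, 4) tetrahedra; every index is < len(XYZ)

# Boundary triangles: faces that belong to exactly one tetrahedron.
face_count = Counter()
for tet in elem:
    for f in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
        face_count[tuple(sorted(tet[list(f)]))] += 1
faces = np.array([f for f, n in face_count.items() if n == 1])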
