Mat Taylor (mattaylor), Proteus, Bay Area

mattaylor/elvis 28

Truthy, ternary, elvis, conditional assignment, and conditional access operators for Nim

mattaylor/cloud9 1

Cloud9 IDE - by javascripter for javascripters - Powered by Ajax.org

mattaylor/garcon 1

SproutCore build tools using node.js

mattaylor/jasap 1

Dojo-based micro MVC framework for rapid scaffolding of JSON Schema-based apps using an extensible view pattern

mattaylor/json-schema 1

JSON Schema specifications, reference schemas, and a CommonJS implementation

mattaylor/jsref 1

Fast, flexible, lightweight (<1kb min) JSON-REF resolver with support for JSON Pointer, remote, and custom URI dereferencing

mattaylor/node-twilio 1

A Twilio helper library for node

issue opened TadasBaltrusaitis/OpenFace

Unresolved reference to 'dgesvd_'

On a Nvidia Jetson Nano I have compiled OpenBLAS dev branch from source.

Everything up to the compilation of OpenFace itself is pretty smooth sailing. I followed the manual installation instructions and did as proposed in the ARM thread. When running make on OpenFace itself, it fails at "Linking CXX executable ../../bin/FaceLandmarkVid" with an undefined reference to dgesvd_ from dlib::scan_fhog_pyramid. I have tried both dlib built from source and libdlib-dev from apt.

[ 84%] Linking CXX executable ../../bin/FaceLandmarkImg CMakeFiles/FaceLandmarkImg.dir/FaceLandmarkImg.cpp.o: In function `dlib::scan_fhog_pyramid<dlib::pyramid_down<6u>, dlib::default_fhog_feature_extractor>::build_fhog_filterbank(dlib::matrix<double, 0l, 1l, dlib::memory_manager_stateless_kernel_1<char>, dlib::row_major_layout> const&) const': FaceLandmarkImg.cpp:(.text._ZNK4dlib17scan_fhog_pyramidINS_12pyramid_downILj6EEENS_30default_fhog_feature_extractorEE21build_fhog_filterbankERKNS_6matrixIdLl0ELl1ENS_33memory_manager_stateless_kernel_1IcEENS_16row_major_layoutEEE[_ZNK4dlib17scan_fhog_pyramidINS_12pyramid_downILj6EEENS_30default_fhog_feature_extractorEE21build_fhog_filterbankERKNS_6matrixIdLl0ELl1ENS_33memory_manager_stateless_kernel_1IcEENS_16row_major_layoutEEE]+0x58c): undefined reference to 'dgesvd_'

Any help is greatly appreciated.

created time in a day

issue comment TadasBaltrusaitis/OpenFace

Mac Os build problem

Same issue here; I fixed it by switching the SDK via

sudo xcode-select -s /Library/Developer/CommandLineTools 
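
As a side note, xcode-select -p prints the currently selected developer directory, and, assuming Xcode itself is installed in its default location, switching back to the full SDK later works the same way:

xcode-select -p
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
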
sergio-dl

comment created time in a day

issue opened TadasBaltrusaitis/OpenFace

just face landmarks

Hello, can I use /bin/FaceLandmarkImg and get just the landmarks? I always get the landmarks and the head position.

created time in 2 days

fork dom96/freenode-exodus

Projects and channels that have decided to leave Freenode. (Leave count as of 2021-06-19: 902)

fork in 4 days

issue opened TadasBaltrusaitis/OpenFace

Error with finding package configuration file

When I run cmake -D CMAKE_BUILD_TYPE=RELEASE .. I get a message saying: CMake Error at CMakeLists.txt:16 (find_package): By not providing "FindOpenBLAS.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "OpenBLAS", but CMake did not find one.

Could not find a package configuration file provided by "OpenBLAS" with any of the following names:

OpenBLASConfig.cmake
openblas-config.cmake

Add the installation prefix of "OpenBLAS" to CMAKE_PREFIX_PATH or set "OpenBLAS_DIR" to a directory containing one of the above files. If "OpenBLAS" provides a separate development package or SDK, be sure it has been installed.

How do I resolve this? Thanks!
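
As the error message itself suggests, one way to fix this is to point CMake at the OpenBLAS installation explicitly. A minimal sketch, assuming OpenBLAS was installed under /opt/OpenBLAS (the path here is only a placeholder; use your actual install prefix):

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_PREFIX_PATH=/opt/OpenBLAS ..

Alternatively, OpenBLAS_DIR can be set to the directory that contains OpenBLASConfig.cmake.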

created time in 5 days

issue comment TadasBaltrusaitis/OpenFace

inconsistent outputs in regards to number of frames

Any way you could share the video?

One way to circumvent this issue is to convert the video into a collection of images using a tool such as ffmpeg, and then run OpenFace on the images as a sequence. This should force it to use all of them as input, and the rows in the output should then correspond to the frames.
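
As a rough sketch of that conversion step (input.mp4 and the output pattern are placeholders; adjust names and format to your data):

mkdir -p frames && ffmpeg -i input.mp4 frames/frame_%06d.png

Using a zero-padded pattern such as %06d also keeps the extracted frames in the correct order when the image names are later sorted.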

My suspicion is that OpenCV may have skipped some frames.

farshidrayhan-uom

comment created time in 6 days

issue comment TadasBaltrusaitis/OpenFace

core dumped after Starting tracking

The .txt file should contain all the required information; can you have a look at its contents? The execution seems to have completed successfully, as it reported post-processing of action units and closing of the various streams.

LouisYZK

comment created time in 6 days

issue comment TadasBaltrusaitis/OpenFace

HoG feature dimensionality

Indeed, before applying the SVM or SVR models, the HOG feature is dimensionality reduced. However, this reduction is "folded into" the SVM computation, where we perform a single multiplication instead of two, so the dimensionality-reduced PCA representation is never actually computed online, and there is no easy way to expose it through the tool.
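
A small sketch of what this folding amounts to, using generic symbols rather than the tool's actual variable names: let x be the full 4464-dimensional HOG vector, P the PCA projection matrix, w the SVM/SVR weight vector learned in the reduced space, and b the bias. Then

w^T (P x) + b = (P^T w)^T x + b

so P^T w can be precomputed offline, and at run time only a single multiplication with the full-dimensional x is performed; the reduced vector P x never has to be formed explicitly.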

rubser

comment created time in 6 days

issue comment TadasBaltrusaitis/OpenFace

Mapping between eye-gaze vectors and screen coordinates

No, there's no functionality to do that. See this issue for some ideas on how it could be implemented: https://github.com/TadasBaltrusaitis/OpenFace/issues/577

However, you would only get quite limited accuracy when mapping gaze from OpenFace to screen locations.

BogdanBalcau

comment created time in 6 days

issue opened TadasBaltrusaitis/OpenFace

Mapping between eye-gaze vectors and screen coordinates

Is there any way to map the 3D gaze tracking data to specific points on the screen?

created time in 7 days

issue comment TadasBaltrusaitis/OpenFace

core dumped after Starting tracking

After experiencing a similar situation, my best guess is that it is due to the length of the video, but can the maintainers (@TadasBaltrusaitis) please check?

Did you find any solution, if the reason is indeed the length of the video?

LouisYZK

comment created time in 12 days

issue comment TadasBaltrusaitis/OpenFace

core dumped after Starting tracking

After experiencing a similar situation, my best guess is that it is due to the length of the video, but can the maintainers (@TadasBaltrusaitis) please check?

LouisYZK

comment created time in 13 days

issue opened adamhaile/surplus

reconcileArrays fails on parent.insertBefore

I am using Surplus.content directly to manipulate the content of an element not created or controlled by the Surplus compiler. But the same error can be reproduced with JSX syntax. The following snippet reproduces the error.

import * as Surplus from "surplus";

const a = ["foo ", span("["), "bar", span("]")];
const b = ["foo ", span("{"), "bar", span("}")];
let state = []; 
const body = document.createElement("div");
state = Surplus.content(body, a, state);
state = Surplus.content(body, b, state);

// Failed to execute 'insertBefore' on 'Node': 
// The node before which the new node is to be 
// inserted is not a child of this node.



// utility
function span(s: string) {
  const span = document.createElement("span");
  span.innerText = s;
  return span;
}

The exception is thrown from https://github.com/adamhaile/surplus/blob/master/src/runtime/content.ts#L302. It appears that inserting the right-most <span>}</span> is skipped, due to being marked as NOINSERT. Then, when going to insert the Text node "bar", it attempts to insert relative to the <span> which was not inserted.

I'm not sure why the right-most <span> is marked as NOINSERT by this line in the text node reuse algorithm. Is it failing to mark the "bar" node and mis-marking the <span> node?

created time in 15 days

issue comment TadasBaltrusaitis/OpenFace

TBB can be replaced with cv::parallel_for_

For simplicity, you can simply replace the code in Patch_experts.cpp.

	parallel_for_(cv::Range(0, n), [&](const cv::Range& range){
		for(int i = range.start; i < range.end; i++)
		{
	
			if(visibilities[scale][view_id].rows == n)
			{
				if(visibilities[scale][view_id].at<int>(i,0) != 0)
				{
	
					// Work out how big the area of interest has to be to get a response of window size
					int area_of_interest_width;
					int area_of_interest_height;
	
					if(use_ccnf)
					{
						area_of_interest_width = window_size + ccnf_expert_intensity[scale][view_id][i].width - 1;
						area_of_interest_height = window_size + ccnf_expert_intensity[scale][view_id][i].height - 1;
					}
					else
					{
						area_of_interest_width = window_size + svr_expert_intensity[scale][view_id][i].width - 1;
						area_of_interest_height = window_size + svr_expert_intensity[scale][view_id][i].height - 1;
					}
	
					// scale and rotate to mean shape to reference frame
					cv::Mat sim = (cv::Mat_<float>(2,3) << a1, -b1, landmark_locations.at<double>(i,0), b1, a1, landmark_locations.at<double>(i+n,0));
	
					// Extract the region of interest around the current landmark location
					cv::Mat_<float> area_of_interest(area_of_interest_height, area_of_interest_width);
	
					// Using C style openCV as it does what we need
					CvMat area_of_interest_o = area_of_interest;
					CvMat sim_o = sim;
					IplImage im_o = grayscale_image;
					cvGetQuadrangleSubPix(&im_o, &area_of_interest_o, &sim_o);
	
					// get the correct size response window
					patch_expert_responses[i] = cv::Mat_<float>(window_size, window_size);
	
					// Get intensity response either from the SVR or CCNF patch experts (prefer CCNF)
					if(!ccnf_expert_intensity.empty())
					{
	
						ccnf_expert_intensity[scale][view_id][i].Response(area_of_interest, patch_expert_responses[i]);
					}
					else
					{
						svr_expert_intensity[scale][view_id][i].Response(area_of_interest, patch_expert_responses[i]);
					}
	
					// if we have a corresponding depth patch and it is visible
					if(!svr_expert_depth.empty() && !depth_image.empty() && visibilities[scale][view_id].at<int>(i,0))
					{
	
						cv::Mat_<float> dProb = patch_expert_responses[i].clone();
						cv::Mat_<float> depthWindow(area_of_interest_height, area_of_interest_width);
	
	
						CvMat dimg_o = depthWindow;
						cv::Mat maskWindow(area_of_interest_height, area_of_interest_width, CV_32F);
						CvMat mimg_o = maskWindow;
	
						IplImage d_o = depth_image;
						IplImage m_o = mask;
	
						cvGetQuadrangleSubPix(&d_o,&dimg_o,&sim_o);
	
						cvGetQuadrangleSubPix(&m_o,&mimg_o,&sim_o);
	
						depthWindow.setTo(0, maskWindow < 1);
	
						svr_expert_depth[scale][view_id][i].ResponseDepth(depthWindow, dProb);
	
						// Sum to one
						double sum = cv::sum(patch_expert_responses[i])[0];
	
						// To avoid division by 0 issues
						if(sum == 0)
						{
							sum = 1;
						}
	
						patch_expert_responses[i] /= sum;
	
						// Sum to one
						sum = cv::sum(dProb)[0];
						// To avoid division by 0 issues
						if(sum == 0)
						{
							sum = 1;
						}
	
						dProb /= sum;
	
						patch_expert_responses[i] = patch_expert_responses[i] + dProb;
	
					}
				}
			}
		}
	});

Multiple threads write to the shared variable patch_expert_responses inside the cv::parallel_for_ lambda. Even though each thread writes to a different element, it is still the same variable; is this allowed?

There is no true sharing, but what about false sharing or cache ping-pong effects?

vinjn

comment created time in 17 days

issue comment TadasBaltrusaitis/OpenFace

TBB can be replaced with cv::parallel_for_

patch_expert_responses

Multiple threads write to the shared variable patch_expert_responses inside the cv::parallel_for_ lambda. Even though each thread writes to a different element, it is still the same variable; is this allowed?

vinjn

comment created time in 17 days

started mattaylor/rokuJs

started time in 18 days

issue comment TadasBaltrusaitis/OpenFace

Ubuntu 18.04.5 cmake target not found

I had the same problem and solved it by adding find_package(Threads) to the CMakeLists.txt file in the root directory.

hguuuu

comment created time in 19 days

issue opened TadasBaltrusaitis/OpenFace

HoG feature dimensionality

Hi, I have extracted the HOG features from FeatureExtraction.exe for a frame, and the dimensionality is 4464. In the library paper I read that, using a PCA model, the HOG feature dimensionality is reduced to 1391. How can I extract the 1391-dimensional features?

Thanks!

created time in 24 days

issue comment TadasBaltrusaitis/OpenFace

Encounter problem about Threads::Threads while compiling

Thank you for the pointers! I will test it soon.

RaymondJiangkw

comment created time in 25 days

issue closed TadasBaltrusaitis/OpenFace

Repeating Frame Bug

Hi,

I'm having an issue with the FeatureExtraction tool and the output data that it generates, more specifically the face isolation. The main issue is that the output data under the ".au_aligned" folder and the corresponding output video (constructed by the FeatureExtraction tool) seem to have a repeating-frame issue, where every Xth frame is replaced by another frame.

Blank frame (subject is out of view) being replaced by frame where subject is in view. p1

Regular frame being replaced by irregular frame (frame bug highlighted in orange) p2

Video showing the issue demonstrated through the .avi video that FeatureExtraction creates. Link to video

Any help is much appreciated!

Desktop

OS: Ubuntu 20.04

closed time in a month

Moktar13

issue comment TadasBaltrusaitis/OpenFace

Repeating Frame Bug

I prepended 0's as you recommended and it seems to have fixed the issue! I will continue with more testing just to confirm this and then mark it as solved. Thanks so much for the help!

Moktar13

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Encounter problem about Threads::Threads while compiling

I'm not sure if CUDA is causing an issue here, as OpenFace does not directly depend on CUDA. I wonder if it could be an issue with linking to TBB?

RaymondJiangkw

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Repeating Frame Bug

In order for OpenFace to know which images to load first, you will need to prepend 0's to your file names, as it currently uses a sort function internally to sort the names, which leads to "wrong" ordering of the input.
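
For reference, a rough shell sketch of that renaming step (assuming purely numeric file names such as 1.png, 2.png, ... in the current directory; adjust the extension and the padding width to your data):

for f in [0-9]*.png; do mv -n "$f" "$(printf '%06d.png' "${f%.png}")"; done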

Moktar13

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Repeating Frame Bug

The input images aren't zero-padded, so they're named incrementally like 1, 2, ..., 100, 101, and so on. The output images (face isolation) are zero-padded, e.g. 000001, 000002, 000010, 000011, 000120, 001500, 01250, etc. So it looks like the output images are padded with five 0's at first, but the padding decreases as the image number goes up.

Moktar13

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Repeating Frame Bug

Maybe it is some image sorting issue then. Are all images prepended with 000 or something similar? This actually looks like the code is loading the images in the order 0 1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 etc.

Moktar13

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Repeating Frame Bug

Oh wow, this looks super strange, and I haven't seen anything like this before. I wonder if the issue is with the encoded video. Any chance you could re-encode the input video (e.g. using ffmpeg)?

I don't use the input video directly with OpenFace; I break it up into a series of images (frames) and then use them with FeatureExtraction. So I am unsure if changing the encoding of the input video would do anything. Sorry, I should have mentioned this in the original post!

Moktar13

comment created time in a month

issue comment TadasBaltrusaitis/OpenFace

Output Quality (Gaze Direction Underestimation, Default Face Measures)

Great analysis; you are indeed right that accuracy in X is higher than in Y. This is quite typical of gaze estimation systems. The reason is that there are just fewer pixels to work with on the iris for Y-axis estimation, and they tend to be occluded by the upper and lower eyelids. Furthermore, the dynamic range of eye gaze in Y is lower in general, so the errors become more apparent. There's no easy solution to this problem unfortunately.

LinaJunc

comment created time in a month