[Retracted] Face Detection and Recognition Algorithm in Digital Image Based on Computer Vision Sensor

Abstract

With the continuous innovation of network technology, various convenient network technologies have emerged and human dependence on them has gradually increased, making network information security issues, such as security vulnerabilities and defects in the operating system itself, increasingly important. With the continuous development of China's industrialization, the application of sensors is also becoming more widespread. A traditional sensor can "perceive" a certain quantity or signal, convert it into an electrical signal, record it, and then use a conversion circuit to output the electrical signal as a value or another display form convenient for observation. Nowadays, sensors have developed further: on the basis of the original "perception" function, and combined with computer technology, they integrate data storage, data processing, data communication, and other functions, giving them analysis capabilities and better information display; the technical level has thus reached a new stage. Early intelligent recognition mainly used the uniqueness of finger and palm prints for scanning and comparison, but because of weather conditions or skin-texture constraints, those methods show certain limitations. This paper proposes a new computer vision-based approach covering face detection technology and face recognition technology. Face detection is introduced mainly through the OpenCV method, while face recognition is examined in practical applications through the Seetaface method and the YouTu method. Contrast experiments compare the detection and recognition rates under three different conditions: side-face detection, occlusion detection, and exaggerated facial expressions. The results compare the advantages and disadvantages of each algorithm in each case and effectively verify the effectiveness of the methods.

1. Introduction

With the continuous improvement of science and technology, face detection and recognition are applied in more and more fields, such as identity verification by face scanning in applications, monitoring systems for bank self-service cash machines, face unlocking of mobile phones, and Alipay's face-payment technology; all of these rely on face detection and recognition. As the technology gradually diversifies, face detection and recognition have become closely related to our lives. The technology not only makes life easier and faster but also adds a touch of technological fun. Through operations such as unlocking a phone with one's face, paying by face, and intelligent identification, high technology safeguards the security of our property and identity and realizes the combination of technology and life, making it a vital part of daily living. Sensors can be combined with many technologies to form smart sensors. Vision measurement has developed into a new type of industrial testing technology, and its scope of application is also expanding. Early vision measurement was limited by the software and hardware resources of image sensors and image processing systems; it was expensive, had low performance indicators, suffered relatively high failure rates, and its processing efficiency was not high.

The research of face detection has important research value due to the variability of facial expression, skin color, and illumination. Yong and Yanru [1] studied face detection based on skin color features and found that differences in skin color are obvious under different illumination levels. To solve this problem, they used the YCbCr and HSI skin color spaces as the technical basis: a skin color model distinguishes the skin color region, which effectively reduces the impact of illumination on skin color and makes face detection and positioning through the SNoW classifier more convenient. Comparison experiments show that the method is highly robust, its detection speed is fast, and it delivers excellent face detection and positioning results. Guangsheng and Huarong [2] combined deep learning with convolutional neural networks in order to control the occurrence of abnormal events in a monitoring system: image features are acquired, a recurrent neural network effectively processes the sequence, and the position and size of the detection window are obtained, creating a viable monitoring system that provides real-time warning of anomalies. Experimental results show that the system has strong face detection performance. Luhong et al. [3] used the face template method to effectively locate face edges in order to break through the limitation that face detection could only be performed against plain backgrounds; face templates built from the two eyes and different aspect ratios are used for detection, and the final face detection success rate exceeds 90%, providing an effective solution for detecting frontal faces in simple environments. Yuangen et al. [4] reported that, in order to handle the detection complexity caused by face variability (side faces, rotation, expression, occlusion, etc.), a new improved AdaBoost algorithm was proposed that combines comprehensive skin color preprocessing, filtering with geometric features, and the highly robust Haar features, and that also copes with complex scenes. Experiments show that the method has an excellent detection rate and robustness for side faces and rotation and that it runs well on an embedded platform. Suwen and Yinwei [5] reported that, to address the problems of traditional convolutional neural networks for face detection (slow detection caused by an excessive number of sliding windows, and long training times and slow convergence caused by random initialization), they used a new convolutional neural network based on a grouped, layered-strategy window, combining selective search and Gabor optimization to extract features and improve the generalization of the network structure, obtaining benefits in both face detection accuracy and speed. Chengji et al. [6] addressed the accuracy differences caused by the complexity of faces in real scenes, the confidence of background boxes, and feature combination methods in the wild; their multilayer feature fusion method effectively improves the detection accuracy between adjacent faces. Sheping et al. [7] proposed an LBP-based method to enhance face detection and to reduce the cost of converting nonlinear data to a linear structure [8]; the extracted feature vectors are classified with the SVM algorithm. Experiments show that this method can effectively avoid the complex effects caused by uneven illumination and is very effective for face detection and recognition experiments.

Yong [9] explored face recognition in uncontrolled lighting environments under different illumination conditions. The MSR algorithm can directly extract the illumination invariants of objects, while the GF and INPS algorithms extract illumination invariants indirectly; the characteristics of these algorithms were tested. From the perspectives of feature-level fusion and classifier decision-level fusion, an illumination-robust linear discriminant analysis was designed to overcome the influence of illumination variation in face recognition. Chenkai et al. [10] analyzed the application of face recognition technology from the history of traditional face recognition algorithms to research under deep learning, including the application of deep learning and the DCNN algorithm; finally, the prospects for the future development of face recognition technology were fully discussed. Qianyu et al. [11] noted that deep learning had already achieved a certain degree of success in motion recognition and target detection and that research on the influence of illumination and pose changes on face recognition was urgent, so they proposed a new and effective face recognition method that builds a deep network on the foundation of deep learning and preprocesses a face database to reduce computational complexity, so that the new deep network can effectively use the extracted features [12]. The Softmax regression model is used to judge the face category; the experimental results are excellent, and the validity of the method is verified. Zhouyu et al. [13], in order to optimize the recognition model and accelerate feature extraction from face images, introduced a particle swarm optimization (PSO) algorithm on top of traditional PCA to optimize the SVM model and its kernel function. By reducing the training and recognition time of the SVM, face features can be extracted efficiently, so that the classifier can identify the test data [14]. The final experimental results show that the PSO-optimized SVM model has better performance and generalization ability, higher accuracy of parameter values and recognition, and an excellent effect on improving face recognition efficiency. Feng et al. [15] conducted a deeper exploration of face recognition in the field of compressed sensing theory and studied the norm optimization problem in classification algorithms based on sparse representation. Their image representation algorithm using first-order and second-order information is significantly better in low-dimensional recognition rate; the algorithm improves classification accuracy and reduces face recognition error. Mengxi et al. [16] addressed the nonlinear deformation caused by changes in illumination, posture, expression, and age in face recognition using manifold learning, proposing a Laplacian eigenmap face recognition algorithm based on two-dimensional kernel principal component analysis [17]. Compared with other algorithms in experiments, it has the advantages of a high recognition rate and low computational complexity.
Huixian et al. [18] addressed the degraded recognition performance of the single-sample face recognition problem by exploring the extraction of invariant features: the local binary pattern is used to capture the local texture features of the image accurately and quickly, gradient information is used to improve feature extraction, and the two feature-extraction methods are effectively combined to enhance the recognition of facial features. Experiments show that the BGCSBP algorithm has a high recognition rate, effectively reduces recognition time, and has good applicability.

This paper first explores the face detection and recognition algorithm [19]. Face detection technology is analyzed through the OpenCV method. Then, face recognition technology is explored through the Seetaface method and the YouTu method [20, 21]. Finally, data experiments analyze three aspects: the side face, the occluded face, and the face with exaggerated expressions [22, 23]. The effects of face detection and recognition under these different conditions are compared across the three methods in terms of accuracy [24]. It is found that, in the face detection part, the exaggerated-expression detection module works best with the Seetaface method, while the side-face detection module and the occlusion detection module work best with the YouTu method; in the face recognition part, the face recognition module uses the YouTu method to maximize the detection and recognition rates and to reduce the false detection rate [25].

2. Methods

2.1. OpenCV Method

To perform face recognition, face detection must first be carried out to determine the position of the face in the picture. The OpenCV method is a common face detection approach. It first extracts the Haar features of faces in images to build a large sample set and then uses the AdaBoost algorithm to train the face detector. In face detection, the algorithm adapts effectively to complex environments such as insufficient illumination and background blur, which greatly improves detection accuracy. For a given training set, different training sets are obtained for subsequent rounds by changing the distribution probability of each sample; each training set is used to train a weak classifier, and these classifiers are then weighted and combined. Specifically, every sample initially receives an equal distribution probability; after each round, a new training set is obtained by changing the distribution probabilities according to whether each sample was classified correctly: the more accurately a sample is classified, the lower its distribution probability becomes, while misclassified samples receive higher probabilities. The new training set is used to train the next classifier, and this process is repeated until several classifiers are obtained; the weight of each classifier increases with its classification accuracy.
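
As a concrete illustration, the following minimal sketch runs OpenCV's pretrained Haar-feature AdaBoost cascade, the kind of detector described above. It assumes the opencv-python package is installed; the input filename face.jpg is a hypothetical placeholder for this example:

```python
import cv2

# Load OpenCV's pretrained Haar-feature AdaBoost cascade for frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("face.jpg")                   # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Haar features work on grayscale

# detectMultiScale slides the cascade over the image at several scales
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("face_detected.jpg", img)
```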

Figure 1 shows the structure of the face detection model. The extraction of Haar rectangular features and the strong classifier based on AdaBoost are the important parts of face detection. A Haar feature is composed of several identical rectangles, distinguished by the black-and-white difference of their colors, and the feature value of a Haar feature is defined by the difference between the pixel sums of its white and black rectangles.

Figure 1

Human face detection model structure.

Regarding the number of rectangular features, Papageorgiou et al. proposed a formula: for a detection window of size $W \times H$ and a rectangular feature whose base size is $w \times h$, the number of features is

$$N = XY\left(W + 1 - w\,\frac{X+1}{2}\right)\left(H + 1 - h\,\frac{Y+1}{2}\right), \qquad X = \left\lfloor \frac{W}{w} \right\rfloor,\; Y = \left\lfloor \frac{H}{h} \right\rfloor,$$

where $w$ and $h$ represent the width and height of the rectangular feature, respectively.
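
As a quick check of the claim below that this number is huge, the following sketch evaluates the formula (as reconstructed here) for the classic 24 × 24 detection window and the five classic base rectangle shapes:

```python
def haar_feature_count(W, H, w, h):
    # Number of placements/scales of a w x h base rectangle inside a W x H window
    X, Y = W // w, H // h
    return int(X * Y * (W + 1 - w * (X + 1) / 2) * (H + 1 - h * (Y + 1) / 2))

# Base shapes of the five classic Haar feature types (two-, three-, four-rectangle)
base_shapes = [(2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]
total = sum(haar_feature_count(24, 24, w, h) for (w, h) in base_shapes)
print(total)  # 162336 features for a single 24 x 24 window
```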

It can be seen from the above formula that the number of features is huge, and the OpenCV method solves this problem by introducing the integral image method. Let $i(x, y)$ be an arbitrary image and $ii(x, y)$ be the integral image of the image. Then, the value of any pixel of the integral image is defined as

$$ii(x, y) = \sum_{x' \le x,\; y' \le y} i(x', y').$$

It is calculated by the following recurrence:

$$s(x, y) = s(x, y - 1) + i(x, y), \qquad ii(x, y) = ii(x - 1, y) + s(x, y),$$

where $s(x, y)$ is the cumulative value for each row, the initial value of $s$ is $s(x, -1) = 0$, and the initial value of $ii$ is $ii(-1, y) = 0$.
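
A minimal NumPy sketch of the integral image and the resulting constant-time rectangle sum (plain NumPy for clarity; OpenCV's own cv2.integral would serve the same purpose):

```python
import numpy as np

def integral_image(img):
    # ii(x, y): sum of all pixels above and to the left of (x, y), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    # Sum over the rectangle [x0..x1] x [y0..y1] with only four table lookups,
    # which is what makes evaluating huge numbers of Haar features affordable
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 2) == img[1:3, 1:4].sum()
```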

2.2. Seetaface Method

Seetaface is based on a new convolutional neural network structure. Its face detection module (Seetaface detection), feature point location module (Seetaface alignment), and feature extraction and comparison module (Seetaface recognition) are mainly used for face detection and recognition. First, the face part of the image is segmented to remove the background. The feature points of the face are then recognized and extracted to obtain a feature map, which is expressed in algebraic (vector) form and compared by correlation to determine whether two images show the same person. Feature extraction is shown in Figure 2.

Figure 2

Feature extraction.

The ROC curve of Seetaface detection on the FDDB database is shown in Figure 3.

Face detection uses the funnel-shaped FuSt cascade structure. The top of the funnel consists of several fast LAB cascade classifiers, each oriented to face images of a different pose appearing during detection. The middle of the module is composed of several multilayer perceptrons based on SURF features, and a single multilayer perceptron structure at the end of the module is responsible for uniformly processing face images of the various poses. The characteristic of Seetaface face detection is this funnel shape, wide at the top and narrow at the bottom: the hierarchy of classifiers lets the adopted features change gradually from coarse to fine from top to bottom, ensuring that the background area is removed to the greatest extent while only the face area is retained, and effective candidate selection at each step makes the next step more efficient. Computer face recognition is shown in Figure 4.

In the feature point location module, the CFAN structure, a cascade of multistage stacked autoencoder networks, is used to locate the feature points of the image found by Seetaface face detection. The specific structure is shown in Figure 5. It mainly adopts the classic five-point positioning method, that is, five landmark points on the eyes, nose, and mouth, which ensures the accuracy of face detection and recognition to the greatest extent. The first-stage autoencoder network performs a fast estimate on a low-resolution face region, positioning the face shape as $s_0$. Since the resolution at this stage is very low, $s_0$ only roughly outlines the face shape. Each subsequent stage increases the resolution, so that the clarity of the processed picture is continuously improved; after each stage of the autoencoder network, $s_0$ is refined step by step, and the facial feature points are optimized more and more finely.
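
A toy sketch of this coarse-to-fine idea follows (not the actual CFAN/Seetaface implementation; the stage functions here are hypothetical stand-ins for the trained autoencoder regressors):

```python
import numpy as np

def coarse_to_fine_alignment(pyramid, stages, s0):
    # pyramid: images from low to high resolution; stages: one callable per level
    # that returns a correction to the current 5-point shape estimate
    s = s0.copy()
    for img, stage in zip(pyramid, stages):
        s = s + stage(img, s)          # each stage refines the previous estimate
    return s

# Dummy usage: stand-in stages that nudge landmarks toward the image centre
pyramid = [np.zeros((32, 32)), np.zeros((64, 64)), np.zeros((128, 128))]
def centre_pull(img, s):
    centre = np.array([img.shape[1] / 2, img.shape[0] / 2])
    return 0.1 * (centre - s)
stages = [centre_pull] * 3
s0 = np.array([[10.0, 12.0], [22.0, 12.0], [16.0, 18.0], [12.0, 24.0], [20.0, 24.0]])
print(coarse_to_fine_alignment(pyramid, stages, s0))
```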

Figure 3

ROC curve.

Figure 4

Computer face recognition.

Figure 5

Multilevel stack self-encoder network.

Figure 6

Five-position positioning analysis table.

Figure 7

Face recognition contrast similarity.

According to Figure 5, the feature extraction and comparison module is mainly based on a convolutional neural network model; the similarity of two images is computed to complete the recognition step. The similarity of images of the same face exceeds 70%, while the similarity of images of different people is below 30%, which meets the needs of daily face recognition.
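
A minimal sketch of this comparison stage, assuming the CNN has already produced feature vectors (the 0.70/0.30 thresholds are the figures quoted above):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a, feat_b, threshold=0.70):
    # Same-face pairs score above ~0.70 and different people below ~0.30,
    # so a single threshold between the two separates them cleanly
    return cosine_similarity(feat_a, feat_b) >= threshold
```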

2.3. YouTu Method

As an emerging face recognition algorithm, the YouTu method adopts the classical boosting algorithm to analyze facial features and completes the face recognition (verification) part by combining deep learning methods. Let $H = \{h_1, h_2, \ldots, h_T\}$ be a collection of weak classifiers; the ensemble is then

$$F(x) = \sum_{t=1}^{T} \alpha_t h_t(x),$$

where $\alpha_t$ is the correlation coefficient (weight) of each member, and both $\alpha_t$ and $h_t$ can be learned through the boosting process.

Normally, if the input source image $x$ belongs to the input space $\mathcal{X}$ and the output result $y$ belongs to the output space $\mathcal{Y}$, then the pair $(x, y)$ obeys a probability distribution $P(x, y)$.

At this point, $F$ can predict the unknown $y$: each input's category label is given by a function $h: \mathcal{X} \to \mathcal{Y}$, $y = h(x)$. We call this a hard classifier. However, in practice we are more inclined to use a soft classifier, a real-valued function $f: \mathcal{X} \to \mathbb{R}$ in which each input's category label is given by $\operatorname{sign}(f(x))$; then

$$R(f) = \mathbb{E}_{(x, y) \sim P}\left[L\big(y, f(x)\big)\right],$$

where $L$ represents the loss function and the risk $R(f)$ is called the generalization error rate. Because the distribution $P$ is not known, $R(f)$ cannot be minimized directly. Therefore, a surrogate rule is needed to reduce the generalization error rate: the empirical risk over the training set is minimized instead,

$$\hat{R}(f) = \frac{1}{n} \sum_{i=1}^{n} L\big(y_i, f(x_i)\big).$$

According to the law of large numbers, when $n \to \infty$, $\hat{R}(f) \to R(f)$.

In daily use, the amount of training data is not very large, but the actual error rate will not be large either: the boosting algorithm keeps the model complexity low in these cases, so overfitting does not occur.
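
A compact sketch of the boosting loop itself follows: discrete AdaBoost with caller-supplied weak learners, a schematic of the classical algorithm rather than YouTu's proprietary variant. The toy 1-D data and threshold stumps at the end are illustrative only:

```python
import numpy as np

def adaboost(X, y, weak_learners, T):
    # y in {-1, +1}; weak_learners: candidate callables h(X) -> {-1, +1}
    n = len(y)
    w = np.full(n, 1.0 / n)                       # initial sample distribution
    ensemble = []
    for _ in range(T):
        errors = [np.sum(w * (h(X) != y)) for h in weak_learners]
        k = int(np.argmin(errors))                # best weak classifier this round
        eps = max(errors[k], 1e-12)
        alpha = 0.5 * np.log((1.0 - eps) / eps)   # member weight grows with accuracy
        pred = weak_learners[k](X)
        w *= np.exp(-alpha * y * pred)            # boost misclassified samples
        w /= w.sum()
        ensemble.append((alpha, weak_learners[k]))
    return ensemble

def predict(ensemble, X):
    F = sum(alpha * h(X) for alpha, h in ensemble)
    return np.sign(F)                             # hard classifier sign(F(x))

# Usage with threshold stumps on a separable 1-D toy problem:
X = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.7])
y = np.array([1, 1, 1, -1, -1, -1])
stumps = [lambda X, t=t: np.where(X < t, 1, -1) for t in np.linspace(0.2, 0.85, 6)]
model = adaboost(X, y, stumps, T=3)
print(predict(model, X))   # reproduces y on this toy set
```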

The posterior probability of the hidden topic $z_k$ can be estimated according to the existing parameters (the E-step):

$$P(z_k \mid d_i, w_j) = \frac{P(w_j \mid z_k)\,P(z_k \mid d_i)}{\sum_{l=1}^{K} P(w_j \mid z_l)\,P(z_l \mid d_i)}.$$

The expected complete log-likelihood function is then maximized according to the estimated hidden topic $z_k$:

$$E[\mathcal{L}] = \sum_{i}\sum_{j} n(d_i, w_j) \sum_{k=1}^{K} P(z_k \mid d_i, w_j)\,\log\big[P(w_j \mid z_k)\,P(z_k \mid d_i)\big].$$

We introduce the normalization constraints $\sum_j P(w_j \mid z_k) = 1$ and $\sum_k P(z_k \mid d_i) = 1$ and use the Lagrangian coefficient method to solve for the maximum of the complete log-likelihood function (the M-step):

$$P(w_j \mid z_k) = \frac{\sum_i n(d_i, w_j)\,P(z_k \mid d_i, w_j)}{\sum_{m}\sum_i n(d_i, w_m)\,P(z_k \mid d_i, w_m)}, \qquad P(z_k \mid d_i) = \frac{\sum_j n(d_i, w_j)\,P(z_k \mid d_i, w_j)}{n(d_i)}.$$

For a two-class classification problem, define the training sample set

$$\{(x_i, y_i)\}_{i=1}^{n}, \qquad x_i \in \mathbb{R}^d,\; y_i \in \{-1, +1\}.$$

It is necessary to introduce slack variables $\xi_i \ge 0$ to convert the optimal-hyperplane problem into a quadratic optimization problem, that is, to solve under the following constraints:

$$\min_{w,\, b,\, \xi}\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i\big(w^{\top}x_i + b\big) \ge 1 - \xi_i,\; \xi_i \ge 0.$$

Using the Lagrange coefficient method, the final quadratic optimization problem is converted into its dual problem, namely

$$\max_{\alpha}\; \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j\, x_i^{\top} x_j \quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0,\; 0 \le \alpha_i \le C,$$

where each element $\alpha_i$ of $\alpha$ is a Lagrangian coefficient.

In order to evaluate whether the fitted parameters are optimal, a loss function $L\big(y, f(x)\big)$ is needed to express the fitting error; for this model, the hinge loss $L\big(y, f(x)\big) = \max\big(0, 1 - y f(x)\big)$ is generally used.

Through the kernel matrix $K$, with entries $K_{ij} = k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$, the training samples can be implicitly mapped from the original input space to a high-dimensional feature space $\mathcal{H}$, where a linear separating hyperplane corresponds to a nonlinear boundary in the input space.
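
A minimal soft-margin kernel SVM sketch using scikit-learn, an illustration of the formulation above rather than the YouTu system's actual classifier; the data here are random placeholders:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 128))          # placeholder face feature vectors
y_train = np.where(X_train[:, 0] > 0, 1, -1)   # placeholder binary labels

# kernel="rbf" realizes the implicit high-dimensional mapping phi(x);
# C is the penalty on the slack variables xi_i in the primal objective
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
```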

In the face detection process, the YouTu method increases the number of positioning points to 90, scattered over the contours of the facial features, greatly increasing positioning accuracy. As shown in Figure 6, the 90 points cover the contours of the eyebrows, eyes, nose, mouth, and face: the two eyebrows have 8 symmetric points each, the left and right eyes have 9 points each, the nose has 13 positioning points, the lips have 22 positioning points, and the face contour has 21 positioning points.

Among them, the YouTu method uses the learned model in face recognition to perform feature processing and calculate feature similarity. The key step is to compare a known face image with the images in a face database, retrieve the related images with high similarity, and clearly display the similarity ratio, as shown in Figure 7; this is the 1:N face search. If the photo to be retrieved contains several faces, the search result corresponding to each detected face is returned. Face retrieval is applied to scenes where the user does not need to declare an identity: the identity of each person in a group is determined by performing face retrieval against an identity photo library.
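
A bare-bones sketch of such a 1:N search over precomputed feature vectors (cosine similarity over a NumPy gallery; the gallery, identifiers, and feature dimension below are hypothetical, while a real service would index a large identity photo library):

```python
import numpy as np

def face_search(query_feat, gallery_feats, gallery_ids, top_k=5):
    # Normalize so the dot product equals cosine similarity
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    sims = g @ q                          # one similarity per gallery identity
    order = np.argsort(-sims)[:top_k]     # most similar first
    return [(gallery_ids[i], float(sims[i])) for i in order]

# Hypothetical gallery of 1000 identities with 128-dimensional features
rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))
ids = [f"person_{i}" for i in range(1000)]
query = gallery[42] + 0.05 * rng.normal(size=128)   # noisy probe of person_42
print(face_search(query, gallery, ids, top_k=3))
```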

This technology has high recognition rate, can search and adjust according to different scenes, and can automatically derive the face evolution model to overcome the bone differences caused by age differences. In addition, the technology also has antiocclusion technology, which effectively reduces the impact of obstacles on face recognition, and has been applied to the public security tracking system to monitor the face position more conveniently and quickly through face recognition.

The signal-to-noise ratio (SNR) is the most important performance indicator of the ADC; it reflects factors such as linearity, distortion, impulse, and noise. For an ideal ADC whose quantization accuracy is $N$ bits, $\mathrm{SNR} = 6.02N + 1.76\ \mathrm{dB}$.

The total harmonic distortion (THD) is the ratio of the power of all harmonic components to the power of the fundamental wave in a certain frequency band:

$$\mathrm{THD} = 10\log_{10}\!\left(\frac{\sum_{k=2}^{K} P_k}{P_1}\right)\ \mathrm{dB},$$

where $P_1$ is the power of the fundamental and $P_k$ is the power of the $k$-th harmonic.

The signal-to-noise-and-distortion ratio can be obtained by fast Fourier transform (FFT) analysis of the output spectrum.

Calling the corresponding MATLAB function and substituting the output samples into the script yields the harmonic distortion ratio according to the formula above.
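
A rough NumPy sketch of estimating THD from an FFT of sampled ADC output; the window choice, bin grouping, and harmonic count are illustrative assumptions, not a calibrated measurement script:

```python
import numpy as np

def thd_db(x, fs, f0, n_harmonics=5):
    # Power spectrum of the windowed signal
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def band_power(f):
        k = int(np.argmin(np.abs(freqs - f)))
        return spec[max(k - 2, 0):k + 3].sum()     # sum a few bins around f

    p_fund = band_power(f0)                        # fundamental power P1
    p_harm = sum(band_power(m * f0) for m in range(2, 2 + n_harmonics))
    return 10.0 * np.log10(p_harm / p_fund)        # THD in dB

# Test tone: 1 kHz fundamental plus a -40 dB second harmonic, fs = 48 kHz
fs, f0, n = 48000, 1000.0, 1 << 14
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(thd_db(x, fs, f0), 1))   # about -40 dB
```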

2.4. Image Sensor

In the past, vision systems were complicated and expensive, and multiple cameras were generally required to complete detailed automatic detection. Because of this complexity, specialized vision experts were often needed to design, integrate, and install a system. These factors naturally limited such systems to certain large companies and made them clearly unsuitable for the small- and medium-sized companies that need a detection system. In contrast, the vision sensor is much simpler, more compact, and easier to install and operate, making it better suited to the needs of ordinary enterprises. The image sensor performance comparison is shown in Table 1.

Table 1

The image sensor performance comparison.

Image sensors can be divided into area array and linear array types according to their working mode. An area array image sensor uses a pixel array arranged in a two-dimensional grid to photograph objects and obtain two-dimensional image information; a complete image can be obtained with a single exposure. Most digital cameras, mobile phone cameras, and surveillance cameras use this structure. A linear image sensor uses a pixel array arranged in a one-dimensional line and obtains two-dimensional image information by scanning across the object: each exposure yields one row of pixels, and a complete image is built up from many rows. It is widely used in machine vision monitoring, aerial photography, spatial imaging, and medical imaging.

Under the control of the timing sequence, the switches cooperate to complete the addition and subtraction of the next-stage signal; the sampling phase then ends and the output is completed. The sampled charge is held on capacitors.

The coordinates of the measurement data obtained by the sensor are based on the sensor coordinate system $O_s$, in which the equation of the projection curve can be expressed as $f(x_s, y_s) = 0$.

Then, the conversion relationship between the coordinate systems $O_s$ and $O_w$ is as follows:

$$\begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x_s \\ y_s \\ 1 \end{bmatrix},$$

where $H$ is the perspective transformation matrix, expressed as

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}.$$
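
A small sketch of applying such a perspective transformation to sensor-plane points. This is generic homogeneous-coordinate math; the identity-plus-shift matrix below is a placeholder, since the actual $H$ would come from sensor calibration:

```python
import numpy as np

def apply_perspective(H, pts):
    # pts: (n, 2) sensor coordinates -> (n, 2) transformed coordinates
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the w component

# Placeholder matrix standing in for a calibrated perspective transformation
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(apply_perspective(H, np.array([[0.0, 0.0], [10.0, 20.0]])))
```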

3. Experiments

This paper analyzes the accuracy of face detection and recognition with OpenCV, Seetaface, and YouTu. In this experiment, the selected face images are randomly chosen and representative, without targeting any specific case. The main work of this paper is as follows: (1) Through in-depth study of OpenCV, Seetaface, and YouTu, face detection and recognition with the three algorithms are carried out and explained, based on the convolutional neural network model and the Haar feature extraction method. (2) Three specific scenarios are studied: the face angle is shifted; the face is occluded, with the eyes, nose, and mouth occluded in turn; and the face bears an exaggerated expression, namely surprise, anger, or crying. (3) Under each specific situation, the three algorithms are compared experimentally to determine which algorithm is most applicable in that case.

The whole test procedure is shown in Figure 8. According to the different aspects of face detection and recognition, it is divided into three cases for discussion. The experimental results show that this method can better control the shielding angle and shielding position. Table 2 shows the register function description.

Table 2

The register function description.

4. Results and Discussion

4.1. Face Detection and Recognition in Side Cases

Select three sets of experimental data, sharpen the image, extract feature points, process the characteristic information of the concave groove on the collected image, calculate the width of the concave groove, and verify the image acquisition accuracy of the sensor. The research data is shown in Table 3.

Table 3

The research data.

Perform data processing on the data in the above table to find the key feature points, as shown in Table 4.

Table 4

Performed data processing.

Face detection and recognition are closely tied to everyday life, and the situation most often encountered in practice is a deviation of the face angle, that is, the side-face case. Figure 9 shows the face detection accuracy of the Seetaface method when the face is angularly offset.

Figure 9

Face detection accuracy when the Seetaface angle shifts.

It can be clearly seen from Figure 9 that as the face angle offset increases, the accuracy of face detection and recognition gradually decreases. When the angular deviation is within 20°, accuracy falls considerably with the offset but stays above 30%, so the face can still be detected and identified, although the error is sometimes too large. When the side angle shift reaches 40° or more, the accuracy of face detection and recognition is only 20% or less, and detection is basically impossible.

The measured object is tilted, or the incident laser light is not perpendicular to the surface during hand-held measurement (the relationship between the light bar and the gap is still vertical), as shown in Table 5.

Table 5

The measured object is tilted.

Image processing is also performed on the above-mentioned collected images, and the verification test results are shown in Table 6.

Table 6

The verification test results.

The number of weak classifiers, usefulness, and detection rate and false detection rate of weak classifiers generated by the combination of each layer of the algorithm in this research are shown in Table 7.

Table 7

The combination of each layer of the algorithm in this research.

Figure 10 shows YouTu face detection on faces with three different lateral offsets, tested on subjects of different genders and ages.

Figure 10

Face detection map with three different offsets.

It can be seen that the YouTu method can accurately locate the facial features even when the face has a slight lateral or angular offset, with essentially no deviation in its positioning. Moreover, when the face is deflected 90 degrees to the side, the method still performs accurate five-point positioning on the visible features, and even for the features of the hidden side it infers positions from context, giving reasonable localization results.

It can be clearly seen from Figure 11 that within an angular offset of 60°, the recognition accuracy for the face image remains comparatively high, and the face can be identified and detected fairly accurately.

Figure 11

YouTu face image offset angle accuracy.

The algorithm accuracy comparison is shown in Table 8.

Table 8

The algorithm accuracy comparison.

The test results of different features are shown in Table 9.

Table 9

The test results of different features.

The OpenCV method was also used to detect faces with a certain offset angle; the accuracy rate is shown in Figure 12. It can be clearly seen from the chart that when the offset angle is small, the detection accuracy is extremely high; but as the offset angle increases, the detection accuracy gradually falls and the results become inaccurate, until face detection is no longer possible. The comparison of cascade classifiers is shown in Table 10.

Figure 12

OpenCV face image offset angle accuracy rate.

Table 10

Comparison on cascade classifiers.

4.2. Face Detection and Recognition under Face Occlusion

In addition, faces partially covered by obstructions that hide the subject's true appearance are common in practice, so face detection and recognition under facial occlusion have great research significance.

As can be seen from Figure 13, when the subject's eyes are occluded, the Seetaface method cannot detect the face at all. When the nose or mouth is blocked, five-point positioning can still be performed clearly, and the portrait and background can be separated fairly accurately.

Figure 13

Seetaface method for face occlusion face detection.

The different sample libraries and test libraries of ORL, AR, Yale-B, and CAS-PEAL-R1 are each cropped to a uniform pixel size. In addition, the pose in the training library is manually corrected, and the average over multiple tests is reported. Experiment 1 compares the recognition rates of Algorithms 1, 2, and 3 to verify the importance of multichannel weighted representation. The experimental comparison is shown in Table 11.

Table 11

The experimental comparison.

It is proved that it is more effective to extract the features on the salient face area. The experimental comparison is shown in Table 12.

Table 12

Extract the features on the salient face area.

As shown in Figure 14, no matter whether the eyes, nose, or mouth are blocked, the YouTu method can accurately locate the 90 feature points along the facial contours; the method is highly effective for face detection under occlusion.

Figure 14

YouTu method for face occlusion face detection.

As can be seen from Figure 15, with the OpenCV method the face can be effectively separated from the background only when the nose is blocked; face detection cannot be performed when the eyes or mouth are blocked.

Figure 15

OpenCV method for face occlusion face detection.

4.3. Face Detection and Recognition under Exaggerated Facial Expressions

Compare the recognition rate of the methods in this chapter, as shown in Table 13.

Table 13

Compare the recognition rate of the methods in this chapter.

The sensitivity of the algorithm in this research to noise is also tested. In practice, pictures may contain various kinds of noise. After adding several common noises to the test pictures, the method in this chapter is compared with the traditional holistic Gabor representation; the recognition rates are shown in Table 14.

Table 14

Test the sensitivity of the algorithm.

Facial expression changes are extremely diverse, and each person's expressions are unique to them. Such complex expressions make face detection and recognition extremely difficult, so finding a more effective way to locate faces under various expression changes is extremely important.

It can be concluded from Figure 16 that the Seetaface method can effectively and accurately distinguish the face from the background and accurately locate the facial features in any of the expression states of surprise, anger, and crying.

Figure 16

Seetaface method for face detection in different expressions.

As seen in Figure 17, the YouTu method can also perform face contour segmentation under the three different expressions and accurately separate face and background. From a precision standpoint, however, its facial feature points deviate slightly, resulting in a small decline in accuracy.

Figure 17

YouTu method for face detection in different expressions.

As can be seen from Figure 18, OpenCV can perform face detection for the different expressions, but the method can only be used for detecting the face region; it cannot perform accurate five-point positioning of the facial features.

Figure 18

OpenCV method for face detection in different expressions.

5. Conclusion

In the side-face case, by comparing the accuracy of the three face detection methods, Seetaface, YouTu, and OpenCV, under angular offset, it can be clearly found that the Seetaface method performs effective face segmentation and detection when the angular offset is small; but when the offset angle grows, its accuracy gradually decreases with the angle, and face detection fails entirely under severe offset. The YouTu method can perform accurate face recognition detection even when the face angle is large, and even under severe offset it localizes the hidden facial features fairly accurately. The detection results of the OpenCV method, however, deviate substantially, and a slightly larger angular offset already makes detection impossible. Ordinary photoelectric sensors have only a single light-sensitive element, so many applications require several such sensors to detect the various characteristics of a component, whereas a vision sensor captures an image containing millions of pixels and can therefore inspect component details, preventing missed inspections and improving inspection accuracy, which is especially necessary for the inspection of electronic components.

The image digital pixel sensor is a sensitive element, and the highly integrated chip lets the system avoid using multiple driver chips, improving the stability of the sensor. In the case of facial occlusion, the YouTu method is clearly the most accurate: no matter which part is occluded, it can effectively locate the contour of the face, and its position prediction for the occluded part is also accurate. The OpenCV method performs worst: the face can be detected only when the nose is occluded, and detection fails once either the eyes or the mouth are occluded. The Seetaface method is also degraded by occlusion: face detection is impossible when the eyes are blocked, and only marginally possible, with low accuracy, when the nose or mouth is blocked.

The information collected by the image sensor greatly reduces the bandwidth demanded of the filter in the subsequent image signal processor and enhances the signal-to-noise ratio, ensuring imaging quality within the available dynamic range. From the analysis of faces with exaggerated expressions, Seetaface is the most effective method: it performs face recognition and localization effectively and accurately, including accurate positioning of the facial features. The YouTu method can also segment the background and facial features in face detection and recognition, but its positioning has certain errors and is less accurate than Seetaface. The OpenCV method cannot perform five-point positioning in detection; it is functionally simpler and can only perform background segmentation.

Data Availability

This article does not cover data research. No data were used to support this study.

Conflicts of Interest

There are no potential competing interests in our paper. All authors have seen the manuscript and approved the submission. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Copyright © 2021 Di Lu and Limin Yan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
