Driver drowsiness recognition detects fatigue or tiredness in a driver while driving a vehicle. Such a system can help prevent road and traffic accidents.
The National Highway Traffic Safety Administration (NHTSA) reports how many people have died because of drowsy driving. According to its reports, between 2013 and 2017 there were a total of 4,111 fatalities involving drowsy driving. In 2017, 91,000 police-reported crashes involved drowsy drivers, and those crashes left around 50,000 people injured. Drowsiness visibly degrades driving performance and reduces the driver's control of the vehicle. So we should provide a solution: a drowsiness recognition system that reduces the death rate from accidents caused by drowsy driving.
The most practical way to identify driver drowsiness is through the driver's eye state. The driver's symptoms can be observed early enough to take preventive action and avoid an accident. AI-based models consider various attributes of the driver's eyes, such as longer blink duration, slow eyelid movement, smaller eye openings, and frequent nodding.
We can detect a driver's drowsiness by counting how many times the driver closes their eyes within a given time window; drowsy people close their eyes more often than alert people. Such a system can be built with different algorithms, such as an AdaBoost-based face detector, the Active Shape Model (ASM), and Principal Component Analysis (PCA), and the trained model can alert drivers whenever they show signs of fatigue.
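The eye-closure counting idea can be sketched as a sliding-window score over per-frame eye states. The function name, window size, and the 0.5 alert threshold below are illustrative assumptions, not values from the article:

```python
from collections import deque

def drowsiness_score(eye_states, window=10):
    """Fraction of the last `window` frames in which the eyes were
    closed (1 = closed, 0 = open)."""
    recent = list(eye_states)[-window:]
    return sum(recent) / max(len(recent), 1)

# Hypothetical per-frame eye states from an eye-state classifier.
states = deque([0, 0, 1, 1, 1, 0, 1, 1, 1, 1], maxlen=300)

if drowsiness_score(states) > 0.5:  # threshold chosen for illustration
    print("ALERT: driver may be drowsy")
```

A real system would append one state per video frame and re-evaluate the score continuously.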
We can create a dataset for this problem from several kinds of measures: physiological measures such as brain waves, heart rate, and pulse rate; ocular measures, which capture the person's eye state; and measures based on vehicle behavior.
However, none of these measures performs as well as ocular measures. Before collecting an image dataset for drowsiness recognition, we record videos of different drivers' movements while driving using a camera. After recording, we split the videos into individual images. These images cover different angles, lighting effects, and weather conditions.
In this problem we have to annotate many kinds of scene conditions, because a drowsiness recognition system must handle different types of scenes: Does the driver wear glasses? If so, what kind? What are the weather conditions, head pose, eye state, and so on?
The annotator has to label these different scene conditions for every image.
All of this information helps the model recognize the driver's drowsiness while driving. How does a model learn this information from every scene? This is where the annotation process plays an important role: without annotated images, a drowsiness recognition system cannot learn to recognize these features across different scenes.
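One way to represent such annotations is a per-frame record; every field name and allowed value below is hypothetical, illustrating the scene conditions described above rather than any standard labeling schema:

```python
# Hypothetical annotation record for one frame; field names and value
# sets are illustrative assumptions, not from a specific labeling tool.
def make_annotation(frame_id, eye_state, glasses, weather, head_pose):
    allowed_eye_states = {"open", "closed", "partially_open"}
    if eye_state not in allowed_eye_states:
        raise ValueError(f"unknown eye state: {eye_state}")
    return {
        "frame_id": frame_id,
        "eye_state": eye_state,  # open / closed / partially_open
        "glasses": glasses,      # e.g. "none", "clear", "sunglasses"
        "weather": weather,      # e.g. "day", "night", "rain"
        "head_pose": head_pose,  # e.g. "frontal", "nodding"
    }

record = make_annotation("frame_0042", "closed", "sunglasses",
                         "night", "nodding")
```

Records like this, one per image, are what the model consumes during training.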
When we record videos of drivers while driving, we cannot get high-resolution footage: constantly changing lighting conditions cause dark shadows and illumination changes. All of these effects degrade model performance during model building.
So we need to apply a few preprocessing techniques to obtain better, higher-resolution images from the originals. We also apply smoothing techniques to remove noise. But even after enhancement and smoothing, we cannot get images from which we can extract exact features; the drivers' facial contours are still not captured well. For this, we apply the self-quotient image technique to the original images. After applying this technique, we obtain the drivers' facial contours, which improves both the accuracy and the speed of the model.
Below is an example image after data preprocessing:
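The self-quotient step can be sketched with numpy alone: each pixel is divided by a locally smoothed version of the image, which suppresses slowly varying illumination. The box-filter size and epsilon below are illustrative choices (real implementations often use a weighted Gaussian filter instead):

```python
import numpy as np

def self_quotient(image, ksize=15, eps=1e-6):
    """Self-quotient image: each pixel divided by a locally smoothed
    value, which cancels out slowly varying illumination."""
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    smoothed = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            smoothed[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return image / (smoothed + eps)

# Under uniform lighting the quotient is ~1 everywhere; shadows and
# highlights deviate from 1, which makes contours stand out.
```

The double loop is for clarity only; a production version would use a vectorized or separable filter.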
An AI-based solution for driver drowsiness recognition is a kind of object recognition problem in computer vision, so for model development we use various object recognition algorithms and techniques.
Recognizing human emotion from various scenes is a fascinating research topic, whose results can be applied in facial expression recognition (FER) or emotion recognition.
Driver drowsiness recognition is treated as an object recognition task: from an incoming video stream of the driver, the system recognizes open and closed eyes. Framing it this way helps us build an accurate, resource-efficient, and cost-effective drowsiness recognition system.
While building an AI-based model for this project, we consider two phases.
Eye Movement Detection
For eye movement identification, we use a face detector, which is widely used to determine the position and scale of the face.
We also use another algorithm to locate human eyes in front-view images. Using these techniques, we extract features from various human face and eye shapes. Object detection algorithms are good at finding face alignment, but changes in atmosphere and lighting affect their results. For this reason, we feed the model self-quotient images, the output of the preprocessing step.
Drowsiness Features
Everything labeled by annotators during annotation is given to the AI-based model as features, so it can learn patterns that indicate drowsiness: percentage of eyelid closure, maximum closure duration, blink frequency, average eye-opening level, eye-opening velocity, and eye-closing velocity.
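The first three features in that list can be computed directly from a per-frame eye-state sequence; the exact definitions below are illustrative, not the article's:

```python
def drowsiness_features(closed, fps=30):
    """Simple drowsiness features from a per-frame eye-state sequence
    (True = eyes closed). Definitions here are illustrative."""
    n = len(closed)
    longest = run = blinks = 0
    prev = False
    for c in closed:
        run = run + 1 if c else 0       # current closure streak length
        longest = max(longest, run)
        if c and not prev:
            blinks += 1                 # a new eyelid closure begins
        prev = c
    return {
        "perclos": sum(closed) / n,          # % of eyelid closure
        "max_closure_s": longest / fps,      # maximum closure duration
        "blink_rate_per_min": blinks * 60 * fps / n,
    }

f = drowsiness_features([False, True, True, False, True,
                         False, False, True, True, True], fps=30)
```

The remaining features (opening level and opening/closing velocity) need a continuous eyelid-opening signal rather than a binary one, so they are omitted here.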
We then apply the trained model in real-world scenarios. Testing follows the same two stages as training: eye movement detection and drowsiness feature extraction. In addition, we apply some further techniques to capture eye movements.
After the ASM algorithm locates the eye region, deformable templates are used to describe the eyes. A mean-shift algorithm serves as the eye tracker, improving the estimates of the driver's posture and eye location. In the tracking stage, the search space can be reduced because the system has an estimate of the eye's position from the previous frame. Whenever the driver faces the camera directly, the ASM algorithm is used to reset the eye region. The trained model predicts the eye position in each frame; if the features approach the drowsiness thresholds, an alert sound is activated.
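The final alert decision can be sketched as a threshold check over the extracted features; the threshold values below are assumptions for illustration, not tuned values from the article:

```python
def should_alert(features, perclos_thresh=0.4, max_closure_thresh_s=1.0):
    """Trigger the alarm when eyelid-closure features cross thresholds
    (threshold values are illustrative, not tuned)."""
    return (features["perclos"] > perclos_thresh
            or features["max_closure_s"] > max_closure_thresh_s)

# Example feature dicts as produced per video segment.
alert = should_alert({"perclos": 0.55, "max_closure_s": 0.4})     # True
no_alert = should_alert({"perclos": 0.10, "max_closure_s": 0.2})  # False
```

A deployed system would typically add hysteresis or require several consecutive positive decisions before sounding the alarm, to avoid false alerts from single noisy frames.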