Gaze tracking using ML and AI

Ad monitoring using gaze tracking and ML. With this kind of monitoring, we can generate ads tailored to what actually attracts each user.
COMPONENTS

Hardware Components
Web camera x 1
Raspberry Pi x 1
Arduino Uno x 1
WIZnet W6100 Ethernet shield x 1
LED x 5
LDR x 1

Software Apps and Online Services
XAMPP - to create the database
OpenCV - for image processing
Python - for the code
DETAILS

[Project photo: IMG_20190915_204146.jpg]

INTRODUCTION

 

Marketing techniques such as advertisements can be improved a great deal with the help of gaze tracking. When we open a website it often loads a bundle of ads, but only a few of them will catch the user's eye; the rest are junk. With the data that gaze tracking provides, we can turn those junk ads into useful ones. Every person's gaze behaviour is different, so if we record which part of a picture attracts a particular user the most (the colour, the heading, the image, and so on), we can build ads around the content that attracted them and hold their attention. With such ads a company can market and promote itself far more effectively. The same data is useful in online shopping: instead of generating recommendations from two weeks of search history, we can generate them from a single day of gaze tracking. Gaze tracking can estimate a person's interests more precisely than other marketing techniques, saving time and producing better results.

WORKING

We have a web camera connected to a Raspberry Pi. The camera records video, and the Raspberry Pi processes it to locate the eye in each frame. We can then track the eye and find out exactly where the user is looking. People look first at what they like most, so with gaze mapping we can find where the user looked first; that first eye position marks the liked area, which can be an image, a heading, a colour, or anything else. Using this data we can make other ads, which improves the companies' marketing and produces better results.

A WIZnet W6100 Ethernet shield mounted on an Arduino Uno is used to connect to the Internet. With the help of the LDR sensor we monitor the ambient light; without enough ambient light the camera cannot track the eye. If the light is insufficient, an LED is triggered to turn on and provide the light we need. We also have a database that stores the data. For now we can determine which colour the user likes most, and with that result we can build an ad that is more attractive to that particular user, so the same ad will look different for different users.
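The light check itself runs on the Arduino with the LDR, but the same idea can be sketched in software. Below is a minimal Python sketch, assuming we approximate the ambient-light check by measuring the mean brightness of a captured frame with OpenCV; the threshold of 40 is an assumption to tune for your camera and room.

import cv2

def is_light_sufficient(frame, threshold=40):
    # mean pixel intensity of the grayscale frame, on a 0-255 scale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() > threshold

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret and not is_light_sufficient(frame):
    print("Ambient light too low - turn on the LED")
cap.release()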

 

  1. Import all these libraries

 

import cv2

import numpy as np

import dlib

from math import hypot

import pymysql

 

  2. Get the live recording of the video and detect the face in it

 

cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
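The listing above only sets up the capture and the detector; the frame-by-frame loop that the later snippets run inside is not shown in the original, so here is a minimal sketch of it. The new_frame panel is used in step 7 but never initialised in the original; its 500x500 size here is an assumption.

while True:
    ret, frame = cap.read()                        # grab one frame from the webcam
    if not ret:
        break
    new_frame = np.zeros((500, 500, 3), np.uint8)  # colour panel shown in step 7 (size assumed)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)                         # detect faces in the grayscale frame
    for face in faces:
        facial_landmarks = predictor(gray, face)   # 68 landmark points for this face
        # ... eye extraction, blink and gaze code from the next steps goes here ...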

 

 

  3. Find the eye in the video using the landmark points given by shape_predictor_68_face_landmarks.dat

 

 

 

# gaze detection
left_eye_region = np.array([(facial_landmarks.part(eye_points[0]).x, facial_landmarks.part(eye_points[0]).y),
                            (facial_landmarks.part(eye_points[1]).x, facial_landmarks.part(eye_points[1]).y),
                            (facial_landmarks.part(eye_points[2]).x, facial_landmarks.part(eye_points[2]).y),
                            (facial_landmarks.part(eye_points[3]).x, facial_landmarks.part(eye_points[3]).y),
                            (facial_landmarks.part(eye_points[4]).x, facial_landmarks.part(eye_points[4]).y),
                            (facial_landmarks.part(eye_points[5]).x, facial_landmarks.part(eye_points[5]).y)], np.int32)

cv2.polylines(frame, [left_eye_region], True, (0, 0, 255), 2)
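In the 68-point dlib model the left eye corresponds to landmarks 36-41 and the right eye to landmarks 42-47, so the snippet above would be wrapped in a helper and called once per eye, roughly like this (the helper name get_gaze_ratio is assumed here; the original does not name it):

LEFT_EYE_POINTS = [36, 37, 38, 39, 40, 41]    # dlib 68-landmark indices, left eye
RIGHT_EYE_POINTS = [42, 43, 44, 45, 46, 47]   # dlib 68-landmark indices, right eye

gaze_ratio_left = get_gaze_ratio(LEFT_EYE_POINTS, facial_landmarks)
gaze_ratio_right = get_gaze_ratio(RIGHT_EYE_POINTS, facial_landmarks)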

 

  4. Using these points we can find whether the eye is closed or not

 

def get_blinking_ratio(eye_points, facial_landmarks):
    # left eye
    left_point = (facial_landmarks.part(eye_points[0]).x, facial_landmarks.part(eye_points[0]).y)
    right_point = (facial_landmarks.part(eye_points[3]).x, facial_landmarks.part(eye_points[3]).y)
    center_top = midpoint(facial_landmarks.part(eye_points[1]), facial_landmarks.part(eye_points[2]))
    center_bottom = midpoint(facial_landmarks.part(eye_points[5]), facial_landmarks.part(eye_points[4]))

    ver_line = cv2.line(frame, center_top, center_bottom, (0, 255, 0), 2)
    hor_line = cv2.line(frame, left_point, right_point, (0, 255, 0), 2)

    # blinking
    hor_line_length = hypot((left_point[0] - right_point[0]), (left_point[1] - right_point[1]))
    ver_line_length = hypot((center_top[0] - center_bottom[0]), (center_top[1] - center_bottom[1]))
    ratio = hor_line_length / ver_line_length
    return ratio
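The midpoint helper and the font used by cv2.putText are not defined anywhere in the listing; minimal assumed implementations follow, plus a typical use of the blinking ratio (the 5.7 threshold is an assumption to tune for your camera):

font = cv2.FONT_HERSHEY_PLAIN   # font for cv2.putText (assumed; not set in the original)

def midpoint(p1, p2):
    # centre point between two dlib landmark points
    return int((p1.x + p2.x) / 2), int((p1.y + p2.y) / 2)

blinking_ratio = get_blinking_ratio(LEFT_EYE_POINTS, facial_landmarks)
if blinking_ratio > 5.7:        # threshold is an assumption; tune it
    cv2.putText(frame, "BLINKING", (50, 150), font, 2, (255, 0, 0))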

 

  5. Next we convert the eye region to a grayscale image and then to a threshold image. Using the threshold image we measure the white part of the eye and find where the user is looking

 

 

 

left_eye_region = np.array([(facial_landmarks.part(eye_points[0]).x, facial_landmarks.part(eye_points[0]).y),
                            (facial_landmarks.part(eye_points[1]).x, facial_landmarks.part(eye_points[1]).y),
                            (facial_landmarks.part(eye_points[2]).x, facial_landmarks.part(eye_points[2]).y),
                            (facial_landmarks.part(eye_points[3]).x, facial_landmarks.part(eye_points[3]).y),
                            (facial_landmarks.part(eye_points[4]).x, facial_landmarks.part(eye_points[4]).y),
                            (facial_landmarks.part(eye_points[5]).x, facial_landmarks.part(eye_points[5]).y)], np.int32)

cv2.polylines(frame, [left_eye_region], True, (0, 0, 255), 2)
# print(left_eye_region)

height, width, _ = frame.shape
mask = np.zeros((height, width), np.uint8)
cv2.polylines(mask, [left_eye_region], True, 255, 2)
cv2.fillPoly(mask, [left_eye_region], 255)
eye = cv2.bitwise_and(gray, gray, mask=mask)

min_x = np.min(left_eye_region[:, 0])
max_x = np.max(left_eye_region[:, 0])
min_y = np.min(left_eye_region[:, 1])
max_y = np.max(left_eye_region[:, 1])

gray_eye = eye[min_y: max_y, min_x: max_x]
_, threshold_eye = cv2.threshold(gray_eye, 70, 255, cv2.THRESH_BINARY)
threshold_eye = cv2.resize(threshold_eye, None, fx=5, fy=5)
height, width = threshold_eye.shape

 

  6. From this we divide the eye into two halves and calculate the gaze ratio

 

left_side_threshold = threshold_eye[0: height, 0: int(width / 2)]
left_side_white = cv2.countNonZero(left_side_threshold)

right_side_threshold = threshold_eye[0: height, int(width / 2): width]
right_side_white = cv2.countNonZero(right_side_threshold)

if left_side_white == 0:
    gaze_ratio = 1
elif right_side_white == 0:
    gaze_ratio = 2
else:
    gaze_ratio = left_side_white / right_side_white

return gaze_ratio
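The ratio above is for a single eye; to make the estimate steadier, the two eyes can be averaged before the thresholding in the next step (this averaging is an assumption, not shown in the original):

gaze_ratio = (gaze_ratio_left + gaze_ratio_right) / 2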

 

 

  7. Using the gaze ratio we find where the user is looking, store the value in a database, and use this data to generate ads accordingly

 

Connecting to the database

 

# connect to the MySQL database served by XAMPP
conn = pymysql.connect(host="localhost", user="root", password="", database="eye_tracking")
cursor = conn.cursor()
s = 0  # counter inserted into the color table below (assumed initial value)
name1 = input("   Enter your name   =    ")
f = open("%s.txt" % name1, "w+")
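The table definitions are not shown in the original. Judging from the INSERT and SELECT statements below (and the Green/Yellow fields written to the text file at the end), the schema presumably looks something like this sketch, run once against the same connection:

# Assumed schema, inferred from the queries used below
cursor.execute("CREATE TABLE IF NOT EXISTS color (red INT, blue INT, black INT)")
cursor.execute("""CREATE TABLE IF NOT EXISTS CUSTOMER (
                      NAME VARCHAR(50), RED INT, BLUE INT, BLACK INT,
                      GREEN INT, YELLOW INT)""")
conn.commit()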

 

Track the eye and record the result

if 0 < gaze_ratio <= 2:
    new_frame[:] = (0, 0, 255)
    cv2.putText(frame, "CENTER", (50, 100), font, 2, (0, 0, 255), 3)
    sql = "INSERT INTO color (blue) VALUES(%d)" % (s + 1)
    cursor.execute(sql)
    conn.commit()
elif gaze_ratio == 0:
    cv2.putText(frame, "RIGHT", (50, 100), font, 2, (0, 0, 255), 3)
    sql = "INSERT INTO color (black) VALUES(%d)" % (s + 1)
    cursor.execute(sql)
    conn.commit()
elif gaze_ratio > 2:
    new_frame[:] = (255, 0, 0)
    cv2.putText(frame, "LEFT", (50, 100), font, 2, (0, 0, 255), 3)
    sql = "INSERT INTO color (red) VALUES(%d)" % (s + 1)
    cursor.execute(sql)
    conn.commit()

     

cv2.imshow("Frame", frame)
cv2.imshow("New frame", new_frame)
cv2.imshow("Threshold eye", threshold_eye)
cv2.imshow("Gray eye", eye)
cv2.imshow("LEFT", left_side_threshold)
cv2.imshow("RIGHT", right_side_threshold)

key = cv2.waitKey(10)
if key == 27:  # Esc key stops the loop
    break

 

 

Enter the result into the database

 

query = ("INSERT INTO CUSTOMER(NAME, RED, BLUE, BLACK) "
         "VALUES ('%s', (SELECT SUM(red) FROM color), "
         "(SELECT SUM(blue) FROM color), (SELECT SUM(black) FROM color))" % (name1))
cursor.execute(query)
cursor.execute("DELETE FROM color")  # clear the per-frame counts for the next run
cursor.execute("SELECT * FROM CUSTOMER")
rows = cursor.fetchall()
for i in rows:
    f.write("Name  =  %s\n" % i[0])
    f.write("Red = %d\n" % i[1])
    f.write("Blue = %d\n" % i[2])
    f.write("Black = %d\n" % i[3])
    f.write("Green = %d\n" % i[4])
    f.write("Yellow = %d\n" % i[5])
f.close()

conn.commit()
conn.close()
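One caution: building SQL with % string formatting is open to SQL injection. pymysql supports parameterized queries, so a safer form of the CUSTOMER insert would be:

query = ("INSERT INTO CUSTOMER(NAME, RED, BLUE, BLACK) "
         "VALUES (%s, (SELECT SUM(red) FROM color), "
         "(SELECT SUM(blue) FROM color), (SELECT SUM(black) FROM color))")
cursor.execute(query, (name1,))
conn.commit()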

IMAGE PROCESSING

This is the main part of the project. The code captures the eye from the video and converts it to grayscale. After converting to grayscale, we convert the frame to a threshold image, which lets us distinguish the pupil from the rest of the eye. There are different ways to do this image processing; the approach used here is a little easier but less accurate. After getting the threshold image of the eye, we divide it into four parts by drawing a vertical line and a horizontal line. The vertical line divides the eye into two halves: when we look to the right, the proportion of white (sclera) pixels relative to the pupil increases on one side, and the distribution is different when we look to the left. So we measure the white part in both halves of each eye and average the two eyes. In the same way, the horizontal line lets us find whether the gaze is towards the top or the bottom.
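A minimal sketch of that four-way split, assuming threshold_eye is the binarised eye image from step 5 (white pixels equal to 255):

def gaze_quadrant_whites(threshold_eye):
    # count white pixels on each side of a vertical and a horizontal split
    height, width = threshold_eye.shape
    left_white = cv2.countNonZero(threshold_eye[:, : width // 2])
    right_white = cv2.countNonZero(threshold_eye[:, width // 2:])
    top_white = cv2.countNonZero(threshold_eye[: height // 2, :])
    bottom_white = cv2.countNonZero(threshold_eye[height // 2:, :])
    return left_white, right_white, top_white, bottom_white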

DOCUMENTS

Code
XAMPP database
Image processing
LDR sensor (Arduino)

Schematics
Gaze tracking
LDR with Arduino
