diff --git a/docs/Goal and project description.md b/docs/Goal and project description.md
index ce90ecd..2c3f514 100644
--- a/docs/Goal and project description.md
+++ b/docs/Goal and project description.md
@@ -1,13 +1,13 @@
 The goal of this project is to bring an easy to set up hardware and software wise, fast and accurate eye and/or mouth tracking solution for any headset that allows for mounting the eye tracking cameras and/or mouth tracking camera in or under the headset (in case of mouth tracking).
 
-It is also open source and open hardware meaning anyone can build it for them selfes and use it free of charge.
+It is also open source and open hardware, meaning anyone can build it for themselves and use it free of charge.
 
-This project is highly inspired by the excelent Full Body Trakcing solution - [SlimeVR](https://docs.slimevr.dev)
-and is aimging to do the same - make the eye and mouth tracking technology independted of the headset so that anyone can use it with anything they want and also make it cheaper.
+This project is highly inspired by the excellent Full Body Tracking solution - [SlimeVR](https://docs.slimevr.dev)
+and is aiming to do the same - make the eye and mouth tracking technology independent of the headset so that anyone can use it with anything they want and also make it cheaper.
 
 Goals in phases:
 
 - Iris tracking - we can detect the iris and translate its location to an avatar though OSC / websockets
 - Eyelid blink tracking - we can detect blinking of the user and translate it to the character
 - Full eyelid tracking - we can detect and estimate the position of eyelids and translate it to the character
-- Mouth tracking - we can detect and estimate different mouth poses the user is making and translate them to the character
+- Mouth tracking - we can detect and estimate different mouth poses the user is making and translate them to the character
\ No newline at end of file
diff --git a/docs/Hardware.md b/docs/Hardware.md
index b8c6c66..cddf34e 100644
--- a/docs/Hardware.md
+++ b/docs/Hardware.md
@@ -35,4 +35,4 @@
 Shave off the IR filter off of any camera module:
 https://marksbench.com/electronics/removing-ir-filter-from-esp32-cam/
 #### Ommited because of safety
-I'm ommiting any IR emmiters here purposfully - I have no idea if any of them are safe to use for prolonged periods of time and thus I leave them out.
+I'm omitting any IR emitters here purposefully - I have no idea if any of them are safe to use for prolonged periods of time, and thus I leave them out.
\ No newline at end of file
diff --git a/docs/Software.md b/docs/Software.md
index 62e1f91..a9a427a 100644
--- a/docs/Software.md
+++ b/docs/Software.md
@@ -7,12 +7,12 @@ The goal is to:
 Now, having that out of the way, how should we process the data? It's being sent to us as an uncompressed stream from the ESP.
 
-We could use the OpenCV for image processing but what out detection?
+We could use OpenCV for image processing, but what about detection?
 
 DLib has a very good eye / face detection pre-trained model
 
 There are some ready-to-go solutions for openCV but won't they be too heavy?
 
-How about training out own CNN model based on yey data sets?
+How about training our own CNN model based on eye datasets?
 They would require labeling but they were used with great success by others.
 
 Datasets - https://datagen.tech/blog/eye-datasets/
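
The Software.md hunk above weighs OpenCV, DLib and a custom CNN for the detection step. As a rough point of reference for the OpenCV route, here is a minimal pupil-detection sketch: it assumes the ESP exposes its stream at a placeholder URL (`esp-eye.local/stream` is made up here), treats the pupil as the darkest blob in the grayscale frame and takes its centroid as the iris position, normalized to [-1, 1]. This is only an illustration of the idea, not code from this project.

```python
# Hedged sketch: pupil = darkest blob in the eye image.
# The stream URL and threshold value are placeholders, not values from this project.
import cv2

STREAM_URL = "http://esp-eye.local/stream"  # hypothetical ESP camera endpoint


def pupil_center(gray):
    """Return the pupil centroid as (x, y) in [-1, 1], or None if nothing is found."""
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    # Under IR illumination the pupil is usually the darkest region of the frame.
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return (cx / w) * 2 - 1, (cy / h) * 2 - 1  # normalize to [-1, 1]


cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(pupil_center(gray))
```

A DLib landmark model or a trained CNN would slot into the same place as `pupil_center`, which is what makes the comparison in Software.md mostly a question of accuracy versus CPU cost.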
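The phase goals end with translating the result to the avatar over OSC / websockets. For the OSC half, one common option from Python is the `python-osc` package; the sketch below forwards a normalized gaze sample to a local OSC receiver. The port 9000 and the `/avatar/parameters/...` addresses are only examples borrowed from VRChat's avatar-parameter convention, not parameters defined by this project.

```python
# Hedged sketch using the python-osc package (pip install python-osc).
# Port and parameter names are illustrative, not defined by this project.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # local OSC receiver, e.g. a VRChat-style client


def send_gaze(x: float, y: float, blink: bool) -> None:
    """Forward one tracking sample to the avatar as individual OSC messages."""
    client.send_message("/avatar/parameters/EyeX", float(x))
    client.send_message("/avatar/parameters/EyeY", float(y))
    client.send_message("/avatar/parameters/Blink", 1.0 if blink else 0.0)


send_gaze(0.1, -0.2, False)
```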