X-Git-Url: https://adrianiainlam.tk/git/?p=mouse-tracker-for-cubism.git;a=blobdiff_plain;f=config.txt;h=1591db14a55a547c53df3baa9c603a3bbeac4fdd;hp=9c28ceeb21ce1ab018f9569ea084b1d1d551b374;hb=126d8fa4d0ea5d05ed56d2b318e747426317808d;hpb=2b1f0c7c63dc30d20c9d55fda5e99b208a9f82b3

diff --git a/config.txt b/config.txt
index 9c28cee..1591db1 100644
--- a/config.txt
+++ b/config.txt
@@ -1,148 +1,54 @@
-# Config file for FacialLandmarksForCubism
+# All coordinates used here are the ones used by the xdo library.
+# For a sense of how these work, run "xdotool getmouselocation".
 
-# The path of this config file should be passed to the constructor
-# of the FacialLandmarkDetector.
+# Milliseconds to sleep for between each update
+sleep_ms 5
 
-# Comments are lines that start with a '#' and are ignored by the parser.
-# Note that a line will be considered as a comment ONLY IF the '#' is the
-# very first character of the line, i.e. without any preceding whitespace.
-
-
-## Section 1: dlib face detection and webcam parameters
-
-# Path to the dlib shape predictor trained dataset
-predictorPath ./shape_predictor_68_face_landmarks.dat
-
-# Value passed to the cv::VideoCapture() ctor
-cvVideoCaptureId 0
-
-# Number of milliseconds to wait after processing each video frame
-# This value controls the frame rate, but the actual frame period
-# is longer due to the time required to process each frame
-cvWaitKeyMs 5
-
-# If 1, show the webcam captured video on-screen; if 0, don't show
-showWebcamVideo 1
-
-# If 1, draw the detected facial landmarks on-screen; if 0, don't draw
-renderLandmarksOnVideo 1
-
-# If 1, laterally invert the image (create a mirror image); if 0, don't invert
-lateralInversion 1
-
-
-## Section 2: Cubism params calculation control
-#
-# These values control how the facial landmarks are translated into
-# parameters that control the Cubism model, and will vary from person
-# to person. The following values seem to work OK for my face, but
-# your mileage may vary.
-
-# Section 2.0: Live2D automatic functionality
-# Set 1 to enable, 0 to disable.
-# If these are set, the automatic functionality in Live2D will be enabled.
-# Note: If you set auto blink, eye control will be disabled.
-autoBlink 0
-autoBreath 0
+# Automatic functionality in Live2D
+autoBlink 1
+autoBreath 1
 randomMotion 0
-
-# Section 2.1: Face Y direction angle (head pointing up/down)
-# The Y angle is calculated mainly based on the angle formed
-# by the corners and the tip of the nose (hereafter referred
-# to as the "nose angle").
-
-# This applies an offset (in degrees).
-# If you have a webcam at the top of your monitor, then it is likely
-# that when you look at the centre of your monitor, the captured image
-# will have you looking downwards. This offset shifts the angle upwards,
-# so that the resulting avatar will still be looking straight ahead.
-faceYAngleCorrection 10
-
-# This is the baseline value for the nose angle (in radians) when looking
-# straight ahead...
-faceYAngleZeroValue 1.8
-
-# ... and this is when you are looking up...
-faceYAngleUpThreshold 1.3
-
-# ... and when looking down.
-faceYAngleDownThreshold 2.3
-
-# This is an additional multiplication factor applied per degree of rotation
-# in the X direction (left/right) - since the nose angle reduces when
-# turning your head left/right.
-faceYAngleXRotCorrection 0.15
-
-# This is the multiplication factor to reduce by when smiling or laughing -
-# the nose angle increases in such cases.
-faceYAngleSmileCorrection 0.075
-
-
-# Section 2.2: Eye control
-# This is mainly calculated based on the eye aspect ratio (eye height
-# divided by eye width). Note that currently an average of the values
-# of both eyes is applied - mainly due to two reasons: (1) the dlib
-# dataset I'm using fails to detect winks for me, and (2) if this is
-# not done, I frequently get asynchronous blinks which just look ugly.
-
-# Maximum eye aspect ratio when the eye is closed
-eyeClosedThreshold 0.2
-
-# Minimum eye aspect ratio when the eye is open
-eyeOpenThreshold 0.25
-
-# Max eye aspect ratio to switch to a closed "smiley eye"
-eyeSmileEyeOpenThreshold 0.6
-
-# Min "mouth form" value to switch to a closed "smiley eye"
-# "Mouth form" is 1 when fully smiling / laughing, and 0 when normal
-eyeSmileMouthFormThreshold 0.75
-
-# Min "mouth open" value to switch to a closed "smiley eye"
-# "Mouth open" is 1 when fully open, and 0 when closed
-eyeSmileMouthOpenThreshold 0.5
-
-
-# Section 2.3: Mouth control
-# Two parameters are passed to Cubism to control the mouth:
-# - mouth form: Controls smiles / laughs
-# - mouth openness: How widely open the mouth is
-# Mouth form is calculated by the ratio between the mouth width
-# and the eye separation (distance between the two eyes).
-# Mouth openness is calculated by the ratio between the lip separation
-# (distance between upper and lower lips) and the mouth width.
-
-# Max mouth-width-to-eye-separation ratio to have a normal resting mouth
-mouthNormalThreshold 0.75
-
-# Min mouth-width-to-eye-separation ratio to have a fully smiling
-# or laughing mouth
-mouthSmileThreshold 1.0
-
-# Max lip-separation-to-mouth-width ratio to have a closed mouth
-mouthClosedThreshold 0.1
-
-# Min lip-separation-to-mouth-width ratio to have a fully opened mouth
-mouthOpenThreshold 0.4
-
-# Additional multiplication factor applied to the mouth openness parameter
-# when the mouth is fully smiling / laughing, since doing so increases
-# the mouth width
-mouthOpenLaughCorrection 0.2
-
-
-## Section 3: Filtering parameters
-# The facial landmark coordinates can be quite noisy, so I've applied
-# a simple moving average filter to reduce noise. More taps would mean
-# more samples to average over, hence smoother movements with less noise,
-# but it will also cause more lag between your movement and the movement
-# of the avatar, and quick movements (e.g. blinks) may be completely missed.
-
-faceXAngleNumTaps 11
-faceYAngleNumTaps 11
-faceZAngleNumTaps 11
-mouthFormNumTaps 3
-mouthOpenNumTaps 3
-leftEyeOpenNumTaps 3
-rightEyeOpenNumTaps 3
-
+useLipSync 1
+
+# Lip sync configurations
+
+# Gain to apply to audio volume.
+# (Linear gain multiplying audio RMS value)
+lipSyncGain 10
+
+# Cut-off volume for lip syncing.
+# If the volume is below this value, it will be forced to zero.
+# This can be useful if you don't have a good quality microphone
+# or are in a slightly noisy environment.
+# Note: this cut-off is applied *after* applying the gain.
+lipSyncCutOff 0.15
+
+# Audio buffer size. This is the window size over which we calculate
+# the "real-time" volume. A higher value will give a smoother
+# response, but will mean a higher latency.
+# The sampling rate is set to 44100 Hz so 4096 samples should
+# still have a low latency.
+audioBufSize 4096
+
+# Mouth form
+# Set this to 1 for a fully smiling mouth, 0 for a normal mouth,
+# and -1 for a frowning mouth, or anything in between.
+mouthForm 0
+
+# Screen number. Currently tracking is supported on only one screen.
+# If you have multiple screens, select the ID of the one you want to track.
+screen 0
+
+# The "middle" position, i.e. the coordinates of the cursor where
+# the Live2D model will be looking straight ahead.
+# For a 1920x1080 screen, {1600, 870} will be somewhere near the
+# bottom right corner.
+middle_x 1600
+middle_y 870
+
+# The bounding box. These are the limits of the coordinates where the
+# Live2D model will be looking 30 degrees to each side.
+top 0
+bottom 1079
+left 0
+right 1919
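
As a worked illustration of the middle/bounding-box parameters in the new
config: a minimal C++ sketch, assuming a linear mapping from cursor position
to the +/-30 degree range mentioned in the comments (0 degrees at middle_x,
+/-30 degrees at the box edges). The struct and function names here are
illustrative only, not taken from the tracker's source.

#include <algorithm>

// Hypothetical holder for the screen-related config values above.
struct ScreenConfig {
    int middleX = 1600, middleY = 870;  // middle_x, middle_y
    int top = 0, bottom = 1079;         // bounding box, Y axis
    int left = 0, right = 1919;         // bounding box, X axis
};

// Face X angle in degrees for a cursor at screen coordinate x:
// 0 at middle_x, +30 at the right edge, -30 at the left edge.
// The Y angle would be computed analogously from middle_y, top and bottom.
double faceXAngle(int x, const ScreenConfig& cfg)
{
    if (x >= cfg.middleX) {
        double t = static_cast<double>(x - cfg.middleX)
                 / (cfg.right - cfg.middleX);
        return 30.0 * std::min(t, 1.0);  // clamp at the bounding box
    }
    double t = static_cast<double>(cfg.middleX - x)
             / (cfg.middleX - cfg.left);
    return -30.0 * std::min(t, 1.0);
}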
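Similarly, a minimal sketch of how lipSyncGain and lipSyncCutOff could
combine, following the order the config comments describe: RMS volume over
one audioBufSize window, multiplied by the linear gain, cut off after the
gain, and clamped for the Cubism mouth-open parameter. Again, the function
name and signature are illustrative, not the tracker's actual implementation.

#include <cmath>
#include <cstddef>

double mouthOpenFromAudio(const float* samples, std::size_t n,
                          double gain,    // lipSyncGain
                          double cutOff)  // lipSyncCutOff
{
    if (n == 0)
        return 0.0;
    double sumSq = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sumSq += static_cast<double>(samples[i]) * samples[i];
    double rms = std::sqrt(sumSq / n);  // "real-time" volume over the window
    double v = rms * gain;              // linear gain on the RMS value
    if (v < cutOff)                     // cut-off applied *after* the gain
        return 0.0;
    return v > 1.0 ? 1.0 : v;           // clamp for the Cubism parameter
}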