# very first character of the line, i.e. without any preceding whitespace.
-## Section 1: dlib face detection and webcam parameters
+## Section 0: OpenSeeFace connection parameters
+osfIpAddress 127.0.0.1
+osfPort 11573
-# Path to the dlib shape predictor trained dataset
-predictorPath ./shape_predictor_68_face_landmarks.dat
-
-# Value passed to the cv::VideoCapture() ctor
-cvVideoCaptureId 0
-
-# Number of milliseconds to wait after processing each video frame
-# This value controls the frame rate, but the actual frame period
-# is longer due to the time required to process each frame
-cvWaitKeyMs 5
-
-# If 1, show the webcam captured video on-screen; if 0, don't show
-showWebcamVideo 1
-
-# If 1, draw the detected facial landmarks on-screen; if 0, don't draw
-renderLandmarksOnVideo 1
-
-# If 1, laterally invert the image (create a mirror image); if 0, don't invert
-lateralInversion 1
-
-
-## Section 2: Cubism params calculation control
+## Section 1: Cubism params calculation control
#
# These values control how the facial landmarks are translated into
# parameters that control the Cubism model, and will vary from person
# to person. The following values seem to work OK for my face, but
# your mileage may vary.
-# Section 2.0: Live2D automatic functionality
+# Section 1.0: Live2D automatic functionality
# Set 1 to enable, 0 to disable.
# If these are set, the automatic functionality in Live2D will be enabled.
# Note: If you set auto blink, eye control will be disabled.
autoBreath 0
randomMotion 0
-# Section 2.1: Face Y direction angle (head pointing up/down)
+# Section 1.1: Face Y direction angle (head pointing up/down)
# The Y angle is calculated mainly based on the angle formed
# by the corners and the tip of the nose (hereafter referred
# to as the "nose angle").
faceYAngleSmileCorrection 0.075
-# Section 2.2: Eye control
+# Section 1.2: Eye control
# This is mainly calculated based on the eye aspect ratio (eye height
-# divided by eye width). Note that currently an average of the values
-# of both eyes is applied - mainly due to two reasons: (1) the dlib
-# dataset I'm using fails to detect winks for me, and (2) if this is
-# not done, I frequently get asynchronous blinks which just looks ugly.
+# divided by eye width).
# Maximum eye aspect ratio when the eye is closed
-eyeClosedThreshold 0.2
+eyeClosedThreshold 0.18
# Minimum eye aspect ratio when the eye is open
-eyeOpenThreshold 0.25
+eyeOpenThreshold 0.21
# Max eye aspect ratio to switch to a closed "smiley eye"
eyeSmileEyeOpenThreshold 0.6
# "Mouth open" is 1 when fully open, and 0 when closed
eyeSmileMouthOpenThreshold 0.5
+# Enable winks (experimental)
+# Winks may or may not work well on your face, depending on the dataset.
+# If all you get is ugly asynchronous blinks, consider setting this to
+# zero instead.
+# Also, this does not seem to work well when wearing glasses.
+winkEnable 1
+
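+# For reference, the eye aspect ratio is eye height divided by eye
+# width; e.g. (illustrative numbers only) an eye 10 px tall and 50 px
+# wide gives a ratio of 10 / 50 = 0.2, which lies between
+# eyeClosedThreshold (0.18) and eyeOpenThreshold (0.21), i.e. a
+# partially open eye.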
-# Section 2.3: Mouth control
+# Section 1.3: Mouth control
# Two parameters are passed to Cubism to control the mouth:
# - mouth form: Controls smiles / laughs
# - mouth openness: How widely open the mouth is
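+# In the standard Cubism parameter set these typically map to
+# ParamMouthForm (roughly -1 for a frown to +1 for a smile) and
+# ParamMouthOpenY (0 closed to 1 fully open), though the exact
+# parameter IDs and ranges depend on your model.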
mouthOpenLaughCorrection 0.2
-## Section 3: Filtering parameters
+## Section 2: Filtering parameters
# The facial landmark coordinates can be quite noisy, so I've applied
# a simple moving average filter to reduce noise. More taps would mean
# more samples to average over, hence smoother movements with less noise,
# but it will also cause more lag between your movement and the movement
# of the avatar, and quick movements (e.g. blinks) may be completely missed.
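+# As a rough guide, an N-tap moving average delays the output by about
+# (N - 1) / 2 samples; assuming a capture rate of ~30 fps, a 7-tap
+# filter adds around (7 - 1) / 2 / 30 = 0.1 s of lag.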
-faceXAngleNumTaps 11
-faceYAngleNumTaps 11
-faceZAngleNumTaps 11
+faceXAngleNumTaps 7
+faceYAngleNumTaps 7
+faceZAngleNumTaps 7
mouthFormNumTaps 3
mouthOpenNumTaps 3
leftEyeOpenNumTaps 3