Authors: Cankurtaran, Halil Said; Boyaci, Ali; Yarkan, Serhan
Title: An Acoustic Signal Based Language Independent Lip Synchronization Method and Its Implementation via Extended LPC
Type: Conference Object
Conference: 28th Signal Processing and Communications Applications Conference (SIU), 5-7 October 2020, Electronic Network (online)
Date of Issue: 2020
Date Accessioned: 2024-10-12
Date Available: 2024-10-12
ISBN: 978-1-7281-7206-4
ISSN: 2165-0608
URI: https://hdl.handle.net/11467/8634
Language: tr
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: formant frequency; linear predictive coding; lip sync
WoS ID: WOS:000653136100350
Scopus ID: 2-s2.0-85100299991

Abstract: Processing human speech with digital technologies gives rise to several important fields of research, with speech-to-text and lip-syncing among the most prominent. Audio-visualization of acoustic signals, real-time visual aids for disabled people, and text-free animation applications are just a few examples. In this study, a language-independent lip-sync method based on extended linear predictive coding is proposed. The proposed method operates on the baseband electrical signal acquired by a standard single-channel off-the-shelf microphone and exploits the statistical characteristics of acoustic signals produced by human speech. The method is implemented on an embedded system, tested, and its performance is evaluated. Results are given along with discussions and future directions.
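
Note: Since the abstract and keywords refer to linear predictive coding and formant frequencies, the sketch below illustrates conventional LPC analysis only: a Levinson-Durbin recursion followed by root-based formant estimation. It is not the extended-LPC lip-sync method described in this record; the LPC order (12), sampling rate (16 kHz), thresholds, and function names are illustrative assumptions.

```python
# Minimal sketch of classic LPC-based formant estimation (illustrative only;
# not the authors' extended-LPC method; order/fs/names are assumptions).
import numpy as np

def lpc_coefficients(frame, order=12):
    """Estimate LPC coefficients [1, a1, ..., ap] via Levinson-Durbin."""
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                              # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]  # update predictor
        err *= (1.0 - k * k)                        # residual energy
    return a, err

def formants(frame, fs=16000, order=12):
    """Rough formant frequencies from the roots of the LPC polynomial."""
    a, _ = lpc_coefficients(frame, order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]        # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs[freqs > 90])        # drop near-DC artifacts

# Example: a synthetic vowel-like frame with spectral peaks near 700/1200 Hz
fs = 16000
t = np.arange(0, 0.03, 1 / fs)
frame = np.hamming(len(t)) * (np.sin(2 * np.pi * 700 * t)
                              + 0.5 * np.sin(2 * np.pi * 1200 * t))
print(formants(frame, fs))
```

In a lip-sync pipeline of this general kind, estimates such as these would be computed per short frame and mapped to mouth shapes, which is what makes the approach independent of any particular language.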