07.10.2020
16:08

A team of researchers at Google Research has developed a real-time, pose-estimation-based detection model that can identify people while they communicate in sign language


Google is working on a system to identify sign language in video calls.

Most video calling services automatically highlight whoever is speaking aloud in group meetings, which puts deaf and hard-of-hearing participants who communicate in sign language at a disadvantage.

To solve this problem, a team of researchers at Google Research has developed a real-time sign language detection model based on pose estimation that can identify signing participants as active speakers.

The system developed by Google, presented at the European computer vision conference ECCV’20, uses a lightweight design that reduces the CPU load required to run it, so that call quality is not affected.

The tool uses a pose estimation model known as PoseNet, which reduces the image data to a set of landmarks on the user’s eyes, nose, shoulders and hands, among others, so that movement can be detected from the landmarks alone rather than from the full video frames.
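
As an illustration of that reduction, here is a minimal sketch (an assumption of this article, not code from the released model) that turns two consecutive frames of pose landmarks into a single motion value. Normalizing by shoulder width, so the value does not depend on how far the user sits from the camera, is an assumed design choice:

```python
import numpy as np

def motion_feature(prev_landmarks: np.ndarray, curr_landmarks: np.ndarray) -> float:
    """Reduce two consecutive frames of pose landmarks to one motion value.

    Each array holds (x, y) coordinates for the tracked points (eyes, nose,
    shoulders, hands, ...), shape (num_landmarks, 2). Indices 0 and 1 are
    assumed here to be the left and right shoulders; this is a convention of
    the sketch, not PoseNet's actual landmark ordering.
    """
    # Frame-to-frame displacement of every landmark (a crude optical flow).
    displacement = np.linalg.norm(curr_landmarks - prev_landmarks, axis=1)
    # Normalize by shoulder width so the feature is scale-invariant.
    shoulder_width = np.linalg.norm(curr_landmarks[0] - curr_landmarks[1])
    return float(displacement.sum() / max(shoulder_width, 1e-6))
```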

Google’s baseline model detects people using sign language with close to 80 percent accuracy while spending only about 0.000003 seconds (3 microseconds) of processing per frame; when motion data from the previous 50 frames is added as context, accuracy rises to 83.4 percent.
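
The article does not specify the baseline architecture, so the sketch below uses scikit-learn’s LogisticRegression as a stand-in linear model, stacking each frame’s motion value with its 49 predecessors to mirror the 50-frame context mentioned above. The data and labels are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW = 50  # frames of context, matching the 50-frame figure above

def windowed_features(motion: np.ndarray) -> np.ndarray:
    """Stack each frame's motion value with the 49 preceding ones."""
    rows = [motion[i - WINDOW + 1 : i + 1] for i in range(WINDOW - 1, len(motion))]
    return np.stack(rows)

# Toy data standing in for real per-frame motion features and signing labels.
rng = np.random.default_rng(0)
motion = rng.random(1000)
labels = (motion > 0.5).astype(int)  # hypothetical "is signing" labels

X = windowed_features(motion)
y = labels[WINDOW - 1 :]
clf = LogisticRegression().fit(X, y)
print("train accuracy:", clf.score(X, y))
```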

[Image: Google Sign Language]

Likewise, the researchers added a long short-term memory (LSTM) layer to the model, which includes “memory over previous time steps, but no lookback”, and with which the system achieves 91.5 percent accuracy in just 3.5 milliseconds.
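
A minimal sketch of such a layer in Keras follows; the layer size of 64 and the single-feature input are assumptions of this sketch, not the published model’s configuration. A unidirectional LSTM is what provides “memory over previous time steps, but no lookback”: at each frame it sees only the past, never future frames, which is what makes real-time use possible:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),        # variable-length motion sequence
    tf.keras.layers.LSTM(64),                      # memory over previous steps only
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(sequence shows signing)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```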

To improve the accessibility of videoconferencing platforms, the researchers made their tool compatible with them, so that it can designate those who use sign language as ‘speakers’.

When it detects a person using sign language, the system emits ultrasonic sound waves that people cannot perceive but that the platforms’ speech detection technologies can, thus highlighting the signing user in the video call as if they were speaking aloud.
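
As a rough illustration of the trick (not the released demo’s code), the sketch below generates a short high-frequency tone with the sounddevice library; both the library and the 20 kHz frequency, chosen because it sits just beyond most adults’ hearing, are assumptions of this sketch. In a real call the tone would also need to be routed into the conference audio, for example through a virtual microphone:

```python
import numpy as np
import sounddevice as sd  # assumed dependency; any audio-output library works

SAMPLE_RATE = 48_000   # Hz; must be high enough to represent the tone
TONE_FREQ = 20_000     # Hz; assumed value near the edge of human hearing
DURATION = 0.5         # seconds of tone per detection

def emit_speech_trigger() -> None:
    """Play a short tone most people cannot hear, but that a conferencing
    platform's audio activity detector will register as speech."""
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    tone = 0.2 * np.sin(2 * np.pi * TONE_FREQ * t)  # low amplitude
    sd.play(tone.astype(np.float32), SAMPLE_RATE)
    sd.wait()

# Hypothetical hook: call this whenever the detector flags signing.
if __name__ == "__main__":
    emit_speech_trigger()
```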

“To better understand how well the demo works in practice, we conducted a user experience study in which participants were asked to use our experimental demo during a video conference and to communicate via sign language as usual. They were also asked to sign over each other and over the speaking participants to test the speaker switching behavior. Participants responded positively that sign language was being detected and treated as audible speech, and that the demo successfully identified the signing attendee and activated the conference system’s audio meter icon to draw attention to the signing attendee,” the released statement reads.

The researchers have published their detection model as open source on GitHub and hope that the technology can “be leveraged to allow sign language speakers to use video conferencing more conveniently.”

(With information from Portaltic)

By magictr
