Apple is working on ways to make its Siri voice assistant more useful to people with atypical speech patterns, such as stuttering, The Wall Street Journal reports.
The report cites a new research paper recently published by Apple, which notes that the company has built a database of 28,000 audio clips, taken from podcasts, that feature stuttered speech. These clips are being used to train the models that help Siri understand what is being said.
Improving Siri for Users With Atypical Speech
An Apple spokesperson told The Wall Street Journal that Apple is doing this to improve voice recognition for users with atypical speech patterns, although the company declined to provide further information.
Apple already offers a Hold to Talk feature, which allows users to control how long they want Siri to listen for. This feature, introduced in 2015, means that users who need extra time to make requests can do so without Siri cutting them off.
To use Hold to Talk on an iPhone, hold down the button that activates Siri (the side button on current-generation iPhones) for as long as you wish to speak.
As an alternative, users can interact with Siri using written requests instead. To do this, open Settings > Accessibility > Siri, then toggle on Type to Siri.
Unlike opt-in features such as Hold to Talk, Apple's new technology will, The Wall Street Journal suggests, "automatically detect" whether a person speaks with a stutter.
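Apple hasn't said how this detection works, but problems like it are commonly framed as binary classification over acoustic features extracted from each clip. The sketch below is a toy illustration under that assumption only, not Apple's method: the feature values are synthetic placeholders, and the "model" is a simple nearest-centroid classifier.

```python
# Toy sketch (NOT Apple's system): stutter detection framed as binary
# classification over per-clip feature vectors. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each clip is summarised by two made-up features
# (e.g. pause rate, sound-repetition score), one row per clip.
fluent_clips = rng.normal(loc=0.2, scale=0.05, size=(100, 2))
stutter_clips = rng.normal(loc=0.6, scale=0.05, size=(100, 2))

# "Training" here is just storing the mean feature vector per class.
centroids = {
    "fluent": fluent_clips.mean(axis=0),
    "stutter": stutter_clips.mean(axis=0),
}

def classify(clip_features):
    """Label a clip by whichever class centroid it lies closest to."""
    return min(centroids, key=lambda c: np.linalg.norm(clip_features - centroids[c]))

print(classify(np.array([0.62, 0.58])))  # clip with stutter-like features
```

A production system would replace the hand-picked features with learned representations of raw audio and a far more capable model, but the overall shape, labelled clips in, a per-clip decision out, is the same.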
It's not clear when this feature will be made available in future Apple products. It could potentially arrive as an update to a future version of iOS 14. Alternatively, if Apple wants to make a big splash with the feature, it could debut it in iOS 15 and make a point of talking it up at this year's Worldwide Developers Conference. That event, likely to take place in June, is where Apple usually shows off the innovations coming in its forthcoming firmware updates.
Siri Is 10 Years Old This Year
This October will mark a decade since Apple introduced Siri with the iPhone 4s. While virtual assistants are, today, fairly ubiquitous---and some may argue that Apple has fallen behind the likes of Google and Amazon in working on them---Apple can claim credit for being the first of the tech giants to introduce them to a wide audience.
Ten years on, Apple is continuing to add new features to Siri, making its AI assistant smarter all the time. Another possible future improvement for Siri includes being able to figure out how far away a user's voice is, and respond accordingly.
As stories like this suggest, Apple is keen to carry on innovating in the field it helped kickstart.