What makes the app particularly interesting to me is its contextual and geolocation awareness.
To speak, someone with a speech disability selects a ‘button’ on the tablet (or smartphone), and the tablet then speaks the selected word or, more typically, phrase aloud. Obviously, the number of options visible at once is limited by the screen size. TalkRocket gets around this by intelligently choosing which options to show based on context: what has already been said, what has been pre-saved, or where the user is located.
For example, if the user enters a coffee shop, the tablet's GPS identifies this and ordering options are presented. These are referred to as place-based vocabularies, or “locabularies.”
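To make the idea concrete, here is a minimal sketch (not TalkRocket's actual code) of how an app could pick a locabulary from a GPS fix. The place names, coordinates, phrases, and the 75-metre radius are all invented for illustration.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical place-based vocabulary sets ("locabularies").
# Names, coordinates, and phrases are illustrative only.
LOCABULARIES = {
    "coffee_shop": {
        "lat": 43.6532, "lon": -79.3832,
        "phrases": ["A medium coffee, please.", "Do you have decaf?", "For here, thanks."],
    },
    "pharmacy": {
        "lat": 43.6510, "lon": -79.3470,
        "phrases": ["I'm here to pick up a prescription.", "Where is the allergy medicine?"],
    },
}

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def phrases_for_location(lat, lon, radius_m=75):
    """Return phrases for the nearest known place within radius_m, else an empty list."""
    name, place = min(
        LOCABULARIES.items(),
        key=lambda item: distance_m(lat, lon, item[1]["lat"], item[1]["lon"]),
    )
    if distance_m(lat, lon, place["lat"], place["lon"]) <= radius_m:
        return place["phrases"]
    return []

# Example: a GPS fix near the coffee shop surfaces ordering phrases on screen.
print(phrases_for_location(43.6533, -79.3831))
```

In a real app the phrase buttons shown on screen would simply be rebuilt from whatever list this kind of lookup returns each time the location changes.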
Here is an 11-minute overview video if you would like to learn more about TalkRocket: