For example, if the comments on a blog are loaded by Ajax (as I know many blogs do), then those comments might not be indexed by web crawlers. This is a sad loss for us web developers, because the keywords and the content of those comments never make it into the index.
Another idea I had was to use .htaccess to secretly redirect web crawlers to a fake copy of the document that contains nothing but the description. The web crawler would then index the page with that text, but users (who are not web crawlers) are not redirected and thus get the real application instead. This solution scares me a little, though, as I don't think most search engines would look kindly upon being tricked about the content of a webpage. Indeed, this trick (known as cloaking) could be used for malicious purposes, and it wouldn't surprise me if big search engines like Google would punish you and refuse to index your page if they found out.
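To make the idea concrete, here is a rough sketch of what such an .htaccess rule might look like with Apache's mod_rewrite. The paths (`/crawler-copy/`) and the list of crawler user agents are hypothetical, and, as noted above, serving crawlers different content than users is cloaking, which search engines penalize:

```apache
# Hypothetical cloaking rule: if the visitor's user agent looks like a
# known crawler, internally rewrite the request to a static, text-only
# copy of the page instead of the Ajax-driven application.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Googlebot|Bingbot|Slurp) [NC]
RewriteRule ^app/(.*)$ /crawler-copy/$1 [L]
```

Regular visitors fall through to the real application, while anything matching the user-agent pattern gets the stripped-down copy, which is exactly why search engines treat this as deceptive.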
“The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.”
—Tim Berners-Lee, W3C Director and inventor of the World Wide Web