Thanks very much to all of you!
So the bottom line is that search engines don't encourage such practices, and it's probably against Google's guidelines, right?
I was surprised that, while googling for something similar, I found hardly any significant hits.
Is demand for such a technique so low?
Just wondering: when thinking about the perfect search engine bot, I would have it load the page in a suitable virtual machine and look at the result afterwards (you know what I mean: comparing the raw HTML to the post-load HTML).
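Roughly what I have in mind, as a minimal sketch (assuming Node.js 18+ with the "puppeteer" package; the URL is just a placeholder, and this is only my illustration, not anything Google actually runs):

```typescript
// Compare the raw HTML (what a plain spider fetches) to the
// DOM after the page has loaded and its JS has run.
import puppeteer from "puppeteer";

async function compareRawVsRendered(url: string): Promise<void> {
  // 1. Fetch the raw markup, as a non-rendering spider would.
  const raw = await (await fetch(url)).text();

  // 2. Load the same URL in a headless browser and let JS execute.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content(); // serialized post-load DOM
  await browser.close();

  // 3. Any difference is a signal that JS rewrote the page after load.
  console.log(`raw: ${raw.length} chars, rendered: ${rendered.length} chars`);
  console.log(raw === rendered ? "no JS-driven changes" : "DOM changed after load");
}

compareRawVsRendered("https://example.com").catch(console.error);
```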
They might not be doing it yet, but do you reckon they'll do that in the future?
I personally don't think parsing JS is reliable from a search engine's point of view, because you could, for example, defer part of the work to a central server, with the JS on the page merely initiating and delegating the job.
That would be completely invisible to any spider that only parses the page source.
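Something like this sketch, say (the /api/links endpoint and the #link-box container are made up for illustration): the static HTML and JS contain no target URLs at all, so a bot that merely parses the source never learns where the links point.

```typescript
// Browser-side: ask a central server for the link targets at runtime
// and inject them into the DOM. No URLs appear in the static source.
async function injectLinks(): Promise<void> {
  const res = await fetch("/api/links"); // the server decides the targets
  const links: { href: string; text: string }[] = await res.json();
  for (const { href, text } of links) {
    const a = document.createElement("a");
    a.href = href;
    a.textContent = text;
    document.querySelector("#link-box")?.appendChild(a);
  }
}

document.addEventListener("DOMContentLoaded", injectLinks);
```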
So iframes are an option too, but I suppose that if someone has JS disabled, things get messy (or what does the user see then?).
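For instance, if the iframe only gets its src at runtime (the element id and URL are invented for the sketch), a user without JS is left looking at an empty frame, or whatever fallback content the page provides around it:

```typescript
// Assign the iframe's src only after load; the static HTML carries no
// target URL, but with JS disabled the frame simply never loads.
const frame = document.getElementById("promo") as HTMLIFrameElement | null;
if (frame) {
  frame.src = "https://partner.example.com/offer";
}
```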
As far as the solution I was thinking of is concerned, the alternate links would be visible to users who have JS disabled - at least one disadvantage.
I'm just amazed that there is no official way to link to content without forwarding PageRank.
(there used to be one: the old "nofollow")