How are the contents of blogs/articles crawled and indexed by Google and other search engines?
This is a technical question. I am planning to create a website that will have articles and questions on technical subjects.
As far as I know, Google returns the results (for a text search) that best match the content of static web pages (correct me if I am wrong).
E.g., if I search for 'xyz' in Google, it returns all the results where 'xyz' matches, i.e. in the static text.
Millions of blog posts and articles are written every day, and of course they are saved in databases; a static file is not created for every article. So how are search engines able to search those databases to retrieve results?
I hope my question is clear to all. Answers would help me in designing the website.
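To clear up the premise of the question: the crawler never queries your database. It fetches your URLs over HTTP, and your server renders each article into HTML from the database at request time, so the crawler sees exactly the same HTML a browser does. A minimal sketch (the article data and slug are made-up examples, and a dict stands in for the real database):

```python
# Minimal sketch of why crawlers never need database access: the web
# server renders each article into plain HTML at request time, so a
# crawler fetching the URL receives ordinary HTML, exactly like a
# browser. The "database" here is just a dict; names are hypothetical.

ARTICLES = {  # stand-in for a real database table
    "intro-to-crawling": {
        "title": "How crawlers work",
        "body": "A crawler fetches URLs over HTTP and parses the HTML.",
    },
}

def render_article(slug: str) -> str:
    """Build the HTML a server would send for /articles/<slug>."""
    row = ARTICLES[slug]  # in a real site: a SQL SELECT by slug
    return (
        "<html><head><title>{title}</title></head>"
        "<body><h1>{title}</h1><p>{body}</p></body></html>"
    ).format(**row)

html = render_article("intro-to-crawling")
print(html)
```

Whether that HTML was read from a flat file or generated from a database is invisible to the crawler; it only sees the HTTP response.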
Why don't you just go and ask Google about that?
The main task of the Google crawler is to visit sites and collect any new information on them. When you submit a new blog post or article and it is approved, it is published on the site you posted it to. When the crawler crawls those sites, it finds your post, often many times across different sites. It also collects your URL, keywords, etc. from each copy of the post. Links back to your site from those posts can raise your keyword rankings, so more approved blog posts or articles on other sites help increase your rankings through this process.
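The crawl step described above can be sketched as parsing each fetched page for its title (a ranking signal) and its outgoing links (the next URLs to visit). This is only a toy illustration; real crawlers also handle robots.txt, deduplication, and scheduling, and the sample page here is made up:

```python
# Toy sketch of one crawl step: parse fetched HTML, extract the page
# title and the outgoing links that feed the crawl queue.
from html.parser import HTMLParser

class LinkAndTitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []       # hrefs found on the page
        self.title = ""       # text inside <title>
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# In a real crawler this HTML would come from an HTTP fetch of the URL.
page = ('<html><head><title>My Article</title></head>'
        '<body><a href="/next">next</a></body></html>')
parser = LinkAndTitleParser()
parser.feed(page)
print(parser.title, parser.links)
```

Each extracted link becomes a new URL for the crawler to fetch, which is how it discovers freshly published posts without anyone granting it database access.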
I've never heard of this being necessary, but if you want to try it, you can always use Apache's mod_rewrite to construct static-looking URLs (ending in .html or .htm) and suppress the HTTP headers that might reveal the technology behind the site, so that Google sees your pages as if they were static.
But I really doubt this will make any difference. Better to concentrate on good content and link building for better positioning in Google.
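For what it's worth, the mod_rewrite approach mentioned above would look roughly like this; the URL pattern and the `article.php` script name are made-up examples, not anything from the original post:

```apache
# Map a static-looking URL like /articles/my-post.html to a dynamic
# script that builds the page from the database.
# article.php and the /articles/ path are hypothetical examples.
RewriteEngine On
RewriteRule ^articles/([a-z0-9-]+)\.html$ article.php?slug=$1 [L,QSA]

# Optionally remove the header that hints at the server-side technology
# (requires mod_headers).
Header unset X-Powered-By
```

Again, this is cosmetic: the crawler indexes whatever HTML the URL returns either way.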