My database uses InnoDB tables heavily. It holds objects that need to be searched for (a) full-text matches and (b) relational matches.
At the moment I have a Sphinx instance indexing the title and description of these objects. The objects also have a lot of HABTM (has-and-belongs-to-many) relationships with other objects, and those relationships need to be part of the search filter.
I thought of having a separate "keywords" table with a counter cache on it, so I could also see the most common keywords (which is data I'd like to have anyway). Sort of like a tagging system.
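Something along these lines, where the table and column names are just placeholders I made up:

```sql
-- Sketch of a keyword (tag) table with a counter cache.
CREATE TABLE keywords (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(64)  NOT NULL,
    usage_count INT UNSIGNED NOT NULL DEFAULT 0,   -- the counter cache
    UNIQUE KEY idx_name (name)
) ENGINE=InnoDB;

-- HABTM join table between objects and keywords.
CREATE TABLE keywords_objects (
    keyword_id INT UNSIGNED NOT NULL,
    object_id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (keyword_id, object_id)
) ENGINE=InnoDB;

-- When tagging an object, bump the counter in the same transaction:
UPDATE keywords SET usage_count = usage_count + 1 WHERE id = 42;

-- "Most common keywords" then comes straight off the cache:
SELECT name, usage_count FROM keywords ORDER BY usage_count DESC LIMIT 10;
```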
I was wondering what the best way of doing this is.
I wonder if it would be a better idea to create a mirror table using MyISAM that holds only the fields that need full-text searching. It could also concatenate all the tag and HABTM data into extra columns, so everything could be filtered in one query instead of several...
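Roughly like this (again, all names are placeholders, and I'm assuming MyISAM's built-in FULLTEXT indexing; note MySQL requires the columns in a MATCH() to correspond exactly to one FULLTEXT index):

```sql
-- Denormalized MyISAM mirror of the InnoDB source tables, for searching only.
CREATE TABLE objects_search (
    object_id   INT UNSIGNED NOT NULL PRIMARY KEY,
    title       VARCHAR(255) NOT NULL,
    description TEXT         NOT NULL,
    tag_list    TEXT         NOT NULL,  -- all tags concatenated, space-separated
    FULLTEXT KEY ft_text (title, description),
    FULLTEXT KEY ft_tags (tag_list)
) ENGINE=MyISAM;

-- One query covering both the text match and the tag filter:
SELECT object_id
FROM objects_search
WHERE MATCH (title, description) AGAINST ('some phrase')
  AND MATCH (tag_list) AGAINST ('+travel' IN BOOLEAN MODE);
```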
I'd use a combination of all of the above.
Try to break the full-scale problem down into smaller pieces.
Don't be afraid to use subqueries and/or temporary tables to assist with your searches.
Searches tend to be complicated beasts at heart.
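For example (a hypothetical sketch, assuming a keywords/join-table setup like the one you described): narrow the candidate set by relations first into a temporary table, then run the rest of the search against that.

```sql
-- Step 1: relational filter into a temporary table.
CREATE TEMPORARY TABLE matching_objects AS
    SELECT DISTINCT object_id
    FROM keywords_objects
    WHERE keyword_id IN (
        SELECT id FROM keywords WHERE name IN ('travel', 'food')
    );

-- Step 2: the expensive part only runs over the narrowed set.
SELECT o.*
FROM objects AS o
JOIN matching_objects AS m ON m.object_id = o.id;
```

One MySQL caveat: a temporary table can't be referred to more than once in the same query, so structure the steps accordingly.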
A lot of software uses a dedicated table for search results and assigns each search its own search ID. For example, on these forums: http://www.webdeveloper.com/forum/se...archid=4355095
Some sites, such as metro.net, use temporary tables.
They do this when trying to find a bus route between two points.
It's obvious they use a temporary table, because if you reload the page after 15 or so minutes it says the table doesn't exist.
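The search-ID approach might look something like this (a sketch; the table layout is my assumption, and the ID just echoes the one in the URL above):

```sql
-- Persistent results table keyed by a per-search id.
CREATE TABLE search_results (
    search_id  INT UNSIGNED NOT NULL,
    object_id  INT UNSIGNED NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (search_id, object_id)
) ENGINE=InnoDB;

-- Run the expensive search once, store the hits, then page through by id:
SELECT object_id FROM search_results WHERE search_id = 4355095 LIMIT 0, 25;

-- Periodic cleanup so the table doesn't grow forever:
DELETE FROM search_results WHERE created_at < NOW() - INTERVAL 15 MINUTE;
```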
I'd even recommend a cron job to keep your "cached" common-keywords search results up to date.
But avoid that kind of caching if the data will constantly be out of date, ticking off the end user with empty or stale results.
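The query that cron job runs could be as simple as recomputing the counter cache from the join table (assuming a keywords table with a usage_count column and a keywords_objects join table, as sketched earlier in the thread):

```sql
-- Periodic refresh of the keyword counter cache, e.g. from a cron job.
UPDATE keywords AS k
SET k.usage_count = (
    SELECT COUNT(*)
    FROM keywords_objects AS ko
    WHERE ko.keyword_id = k.id
);
```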