I've developed a CMS that manages content across several sites. The
content is represented in a tree structure maintained in a database,
where each item has an id and a parent that points to the containing
item or 0 if it's a root item (a site).
I developed a fulltext search script for the CMS that will return
pages ordered by relevance.
This is fine, but I've now had a change request whereby searches can
be restricted to specific sites. This is a problem because the only
way to know what site an item belongs to is to fetch it, walk up its
tree and look at its root node.
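To make the cost concrete, here is a minimal sketch of that walk. The parent pointers are shown as an in-memory dict for illustration; in the real CMS each step of the loop would be a database lookup, which is exactly why doing this per item gets expensive.

```python
# Hypothetical parent map: item id -> parent id (0 means root, i.e. a site).
items = {
    1: 0,   # site A (root)
    2: 1,   # page under site A
    3: 2,   # sub-page under item 2
    4: 0,   # site B (root)
    5: 4,   # page under site B
}

def find_root(item_id, parents):
    """Walk up the parent pointers until we reach an item whose parent is 0."""
    while parents[item_id] != 0:
        item_id = parents[item_id]
    return item_id

# find_root(3, items) -> 1 (site A); find_root(5, items) -> 4 (site B)
```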
I can see 2 possible solutions, but both have drawbacks. The first is
to just fetch all matches and then discard ones that don't belong to
the specified site. This means increased DB load and traffic between
the DB and the webserver and could potentially cause quite a
performance hit when the number of pages being dealt with climbs into
the thousands.
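In code, that first option looks roughly like this (again with an in-memory parent map standing in for the database): every match costs a full tree walk, so with N matches you pay up to N walks, each of which would be one or more DB round trips in practice.

```python
# Hypothetical parent map: item id -> parent id (0 means root, i.e. a site).
parents = {1: 0, 2: 1, 3: 2, 4: 0, 5: 4}

def find_root(item_id):
    """Walk parent pointers up to the root item (parent == 0)."""
    while parents[item_id] != 0:
        item_id = parents[item_id]
    return item_id

matches = [2, 3, 5]   # item ids returned by the fulltext search
site = 1              # the site the search is restricted to

# Discard matches whose root isn't the requested site.
filtered = [m for m in matches if find_root(m) == site]
# filtered == [2, 3]
```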
The other solution is to, in addition to the parent node, also store
the root node in the database. This would give an extra field to
match against and make eliminating items with an SQL query, thus
reducing load and traffic, quite simple. However the big drawback is
that there are now two fields determining where an item fits into the
tree structure instead of just one, making it an obvious source of
potential error. It would also mean updating the code for item
creation and updating, which wouldn't need to be touched with the
other approach.
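For comparison, here is a sketch of the second option using SQLite and a hypothetical `items` table with a denormalized `root_id` column (roots store their own id). The table name, columns, and the `LIKE` stand-in for the real fulltext match are all assumptions; the point is that restricting to a site becomes a plain WHERE clause instead of a per-row tree walk.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        id      INTEGER PRIMARY KEY,
        parent  INTEGER NOT NULL,   -- 0 means this item is a root (a site)
        root_id INTEGER NOT NULL,   -- denormalized: roots store their own id
        title   TEXT
    )
""")
conn.executemany("INSERT INTO items VALUES (?, ?, ?, ?)", [
    (1, 0, 1, "Site A"),
    (2, 1, 1, "About us"),
    (3, 2, 1, "Contact"),
    (4, 0, 4, "Site B"),
    (5, 4, 4, "About us"),
])

# Search restricted to site 1: the root check is a single indexed column match.
hits = conn.execute(
    "SELECT id FROM items WHERE title LIKE ? AND root_id = ?",
    ("%About%", 1),
).fetchall()
# hits == [(2,)] -- the "About us" page on site B (id 5) is excluded
```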
If any of you guys out there have another potential solution that I
might not have thought of, I'd like to hear it. Failing that, which of
the options I do have would you recommend?