I'm trying to optimize some queries on an existing system, and I'm
noticing some odd behavior. I'm performing a join across several
tables. The final table being joined by the optimizer has a clustered
index on the field it uses to join to the rest of the query, but for
some reason SQL Server doesn't seem to actually use that index: the
plan shows a clustered index scan instead of an index seek. Is there
some reason why SQL Server would not use a valid clustered index? I've
dropped and re-added the index, but that doesn't seem to help. I don't
know if it's relevant, but the tables I'm working with are fairly
fat (roughly 2 to 7 KB per row).
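
To give a concrete picture, here's a minimal sketch of the shape I'm
describing. The table and column names (Header, FatDetail, HeaderID, and
so on) are made up, but the structure mirrors the real schema:

    -- Hypothetical schema mirroring the shape of my real tables
    CREATE TABLE dbo.Header (
        HeaderID   INT NOT NULL CONSTRAINT PK_Header PRIMARY KEY CLUSTERED,
        SomeFilter INT NOT NULL
    );

    CREATE TABLE dbo.FatDetail (
        HeaderID   INT            NOT NULL,  -- the join column
        LineNo     INT            NOT NULL,
        StatusCode CHAR(2)        NOT NULL,
        Payload    NVARCHAR(3000) NULL,      -- rows run roughly 2-7 KB wide
        CONSTRAINT PK_FatDetail PRIMARY KEY CLUSTERED (HeaderID, LineNo)
    );

    -- The kind of join where the actual plan shows a clustered index SCAN
    -- on FatDetail instead of the SEEK I expected
    SELECT h.HeaderID, d.LineNo, d.StatusCode
    FROM   dbo.Header    AS h
    JOIN   dbo.FatDetail AS d ON d.HeaderID = h.HeaderID
    WHERE  h.SomeFilter = 1;
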
This is happening for several tables. For some of them I've been able
to work around it by creating a non-clustered index on all the fields
being queried, so the wide leaf pages of the clustered index never
need to be read, but that isn't a viable solution for all of the
tables I'm struggling with.
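
For what it's worth, the workaround looks roughly like this (same
made-up names as above). I've used INCLUDE for the non-key columns
here, but an index keyed on all the queried columns behaves the same
way for this purpose: the query is answered entirely from the narrow
non-clustered index, so the wide clustered-index rows are never touched.

    -- Covering non-clustered index: the join column is the key and the
    -- other queried columns are carried in the leaf level, so the plan can
    -- seek on this index instead of scanning the 2-7 KB clustered rows
    CREATE NONCLUSTERED INDEX IX_FatDetail_HeaderID_Covering
        ON dbo.FatDetail (HeaderID)
        INCLUDE (LineNo, StatusCode);
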
Any ideas? (And no, they aren't willing to redesign any of the
tables.)