I seem to have a few duplicate entries in a very large database.
My database has two identical tables: one is written to on a regular basis (every few seconds), and the other is used for queries and updated from the write table on a daily basis.
I'm trying to get rid of the duplicate entries on the query side, and I thought the most effective way to do this would be to rebuild the read table with the query below. I have also tried the same query listing each column explicitly instead of using *.
Code:
INSERT INTO readtable SELECT * FROM writetable GROUP BY column1, column2;
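A caveat worth raising here: SELECT * combined with GROUP BY column1, column2 asks MySQL to pick arbitrary values for every non-grouped column, and MySQL 5.7.5+ rejects that outright under the default ONLY_FULL_GROUP_BY SQL mode. If a "duplicate" means "same column1 and column2", a common alternative is a UNIQUE key on the read table plus INSERT IGNORE, which lets the server discard repeats — a sketch, assuming the table and column names from the post (the index name uniq_c1_c2 is made up):

Code:
```sql
-- Sketch: treat rows with the same (column1, column2) as duplicates.
-- uniq_c1_c2 is a hypothetical index name.
ALTER TABLE readtable ADD UNIQUE KEY uniq_c1_c2 (column1, column2);

-- Rows that would violate the unique key are silently skipped.
INSERT IGNORE INTO readtable
SELECT * FROM writetable;
```

If instead a duplicate means the entire row repeats, INSERT INTO readtable SELECT DISTINCT * FROM writetable; says that directly, with no GROUP BY involved.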
The problem is that this seems to be crashing my dev MySQL server, though MySQL is not giving me an error.
My other option is to use a GROUP BY on the read table whenever a read request is made, but I thought building the read table without duplicates would be more efficient. I also have an ORDER BY in the read requests, and I have read that mixing GROUP BY with ORDER BY is a no-no...
I have run GROUP BY column1, column2 ORDER BY column1, column6
and it seems to work OK.
Any thoughts on this?
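On the GROUP BY / ORDER BY question: combining the two in one statement is perfectly legal in MySQL. The real pitfall in GROUP BY column1, column2 ORDER BY column1, column6 is that column6 is neither grouped nor aggregated, so the value used for sorting each group is arbitrary and the order is nondeterministic. Wrapping it in an aggregate makes the ordering well defined — a sketch, assuming MAX() is the per-group value you actually want (that choice is a guess):

Code:
```sql
-- Every ORDER BY expression is either a grouped column or an
-- aggregate, so each group has one well-defined sort value.
SELECT column1, column2, MAX(column6) AS max_c6
FROM readtable
GROUP BY column1, column2
ORDER BY column1, max_c6;
```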