Large record count

  • Ike

    #1
    Large record count

    I have a client who wants me to take on a project in which a table in
    another database holds about 1.2 million records. I will need to plan for
    a record length of about 200 bytes, and I'm wondering whether MySQL (5.x,
    on a first-rate Windows server) can handle that size. Does anyone have
    experience with something that large? Thanks, Ike
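
    As a rough sketch of the scale involved (the table and column names below
    are invented): 1.2 million rows at roughly 200 bytes each is only about
    240 MB of raw data before indexes. A table of about that row size might
    look like:

        -- Hypothetical layout; names and column sizes are illustrative only.
        CREATE TABLE client_records (
            id          INT UNSIGNED  NOT NULL AUTO_INCREMENT,  -- 4 bytes
            created_at  DATETIME      NOT NULL,                 -- 8 bytes
            code        CHAR(20)      NOT NULL,                 -- 20 bytes
            description CHAR(160)     NOT NULL,                 -- 160 bytes
            amount      DECIMAL(10,2) NOT NULL,                 -- 5 bytes
            PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
        -- ~197 bytes per row * 1,200,000 rows is roughly 240 MB of data,
        -- plus index overhead.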


  • Markus Popp

    #2
    Re: Large record count

    There's no doubt that MySQL itself has no problem with this amount of
    data. Whether it performs well, of course, depends on a good database
    design (e.g. correct indexing) and on hardware that meets the
    requirements.
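
    As a minimal sketch of what "correct indexing" means here (the table and
    column names are hypothetical), an index on the columns your WHERE
    clauses filter by lets MySQL avoid scanning all 1.2 million rows:

        -- Hypothetical names; adapt to the real schema.
        CREATE INDEX idx_records_code ON client_records (code);

        -- A query that filters on the indexed column can use the index
        -- instead of reading the whole table:
        SELECT id, description
        FROM client_records
        WHERE code = 'ABC-123';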

    There are in fact people who complain that MySQL cannot handle a large
    amount of data. But the true reason is almost always that something's wrong
    with design, hardware etc.

    The performance would probably be better on a Linux server, as MySQL is
    better optimized for Linux than for Windows. However, if the hardware and
    database design are right, it should work on Windows, too.

    Markus



    • bstjean

      #3
      Re: Large record count

      It should be no problem at all, but as always, every situation is
      specific, so we'd need more details about what you'll do with this
      data: mostly updates, inserts, selects? How many concurrent users?

      I used to work at a place where we had a few tables in the 10+ million
      row range, with rows averaging 175 bytes if I recall correctly. Most of
      the queries were selects.

      I am now working with a table of 2.2 million rows at 204 bytes each,
      mostly doing selects against it. Other tables follow an
      insert-select-delete pattern (essentially serving as messaging queues)
      and are pruned every minute.
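
      A per-minute prune of such a queue-style table could be as simple as
      the following (the table and column names are made up for
      illustration):

          -- Hypothetical queue table; remove rows that were already
          -- processed more than a minute ago.
          DELETE FROM message_queue
          WHERE processed = 1
            AND processed_at < NOW() - INTERVAL 1 MINUTE;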

      Every project is specific, and the easy answer is that you can do
      anything as long as you have a good design, proper indexing and good
      hardware. More details about what you want to do would give us a better
      chance of estimating your likely bottlenecks/problems.

      You usually have problems when you have queries not based on the PK
      (and therefore doing table scans, filesorts, GROUP BYs, etc.), lots of
      joins (especially outer joins), or lock contention because of many
      concurrent users...
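
      One way to spot those table scans and filesorts (a sketch; the table
      and query are hypothetical) is to run the query through EXPLAIN and
      look at the type, key and Extra columns:

          -- Hypothetical query; what matters is how to read the output.
          EXPLAIN
          SELECT *
          FROM orders
          WHERE customer_ref = 'ABC-123'
          ORDER BY created_at;
          -- type = ALL            -> full table scan (no usable index)
          -- Extra: Using filesort -> the ORDER BY cannot use an index
          -- An index on (customer_ref, created_at) typically turns this into
          -- type = ref with the filesort gone.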
