SQL Server 2000 and Milliseconds (datetime data type)

  • vikram.mankar@gmail.com

    SQL Server 2000 and Milliseconds (datetime data type)

    I'm running into a constant issue of SQL Server modifying the
    millisecond part of a timestamp inserted from another application. The
    application inserts the timestamp, which includes a millisecond
    portion, as a string (varchar). But when SQL Server moves this data to
    another table (for reporting) and the string is inserted into a
    datetime field, the millisecond value invariably changes by 1-2
    milliseconds for every single data point. Given the time-critical
    nature of this data (to the millisecond), it's almost impossible to
    avoid this other than to leave the data as a string type. But this
    drives the analytical reporting folks wild, as report queries based on
    time criteria are getting messed up. Any ideas how to force SQL Server
    not to mess around with the millisecond value? Does this problem exist
    with SQL Server 2005 as well?

  • Roy Harvey

    #2
    Re: SQL Server 2000 and Milliseconds (datetime data type)

    This is a documented behavior of the datetime datatype. As Books
    Online says: "Values are rounded to increments of .000, .003, or .007
    seconds". It applies to all versions, and I would not expect it to
    ever change.
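
    A quick way to see the rounding for yourself (purely illustrative;
    the table variable and sample values are made up):

        DECLARE @t TABLE (ts datetime);
        INSERT INTO @t (ts) VALUES ('2007-01-18 11:15:14.001');
        INSERT INTO @t (ts) VALUES ('2007-01-18 11:15:14.004');
        INSERT INTO @t (ts) VALUES ('2007-01-18 11:15:14.005');
        SELECT ts FROM @t;
        -- The milliseconds come back as .000, .003 and .007: datetime
        -- stores time in 1/300-second ticks, so each value is rounded
        -- to the nearest tick on the way in.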

    If you must keep it to the exact millisecond then you cannot use
    datetime.

    You could split the information into two columns, say one for the
    date (smalldatetime would do) and the other for milliseconds since
    midnight. Or, since smalldatetime is only accurate to the minute, the
    second column could hold just the seconds and milliseconds past that
    minute. There are countless possible variations; none of them will
    make processing simple.
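
    A rough sketch of one such layout, assuming the incoming string
    always arrives in the fixed format yyyy-mm-dd hh:mm:ss.mmm (the
    table and column names here are invented for illustration):

        CREATE TABLE EventLog (
            event_minute smalldatetime NOT NULL,  -- date and time down to the minute
            ms_past_min  int           NOT NULL   -- seconds + milliseconds past that minute, in ms
        );

        DECLARE @raw varchar(23);
        SET @raw = '2007-01-18 11:15:14.123';

        INSERT INTO EventLog (event_minute, ms_past_min)
        SELECT CAST(LEFT(@raw, 16) AS smalldatetime),        -- '2007-01-18 11:15'
               CAST(SUBSTRING(@raw, 18, 2) AS int) * 1000    -- seconds, as ms
             + CAST(SUBSTRING(@raw, 21, 3) AS int);          -- milliseconds

    Reassembling a comparable value for time-range queries then means
    combining both columns, which is the processing cost mentioned above.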

    Roy Harvey
    Beacon Falls, CT


    • vikram.mankar@gmail.com

      #3
      Re: SQL Server 2000 and Milliseconds (datetime data type)

      I should have known that! Darn...

      Thanks.

