Why does a C# host application change the long double precision of a C++ DLL?

  • Ostrenko
    New Member
    • Jun 2010
    • 1

    Why does a C# host application change the long double precision of a C++ DLL?

    Hello all,

    I developed a DLL in Intel C++ that performs long double arithmetic operations.
    The DLL allocates 128 bits for each long double variable and delivers 19 significant digits when the host application is written in C++ or Delphi.
    With a C# host application the DLL still allocates 128 bits, but the number of significant digits drops to 15 (the same as double).
    How is this possible, and what do I have to do to get the number of significant digits back to 19?
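
    As a rough illustration, here is a minimal sketch (the function and DLL names are placeholders, not my actual interface, and it assumes an x86 build where Intel C++ maps long double to the 80-bit x87 format) of an export that reveals the effective working precision at run time:

    Code:
    // Hypothetical repro sketch.  The export returns the run-time machine
    // epsilon of long double arithmetic: about 1.08e-19 when the FPU works
    // at 64-bit (extended) precision, about 2.22e-16 when it is limited to
    // double precision.
    extern "C" __declspec(dllexport)
    double EffectiveEpsilon()
    {
        volatile long double one = 1.0L;
        volatile long double eps = 1.0L;
        // Halve eps until adding it to 1.0L no longer changes the result.
        while (one + eps / 2.0L != one)
            eps = eps / 2.0L;
        // The result reflects the x87 precision-control bits currently set
        // for the process (they can be inspected with _controlfp from
        // <float.h>).
        return static_cast<double>(eps);
    }

    // Hypothetical C# declaration on the host side:
    // [DllImport("MyLongDouble.dll")]
    // static extern double EffectiveEpsilon();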

    Thanks.
  • GaryTexmo
    Recognized Expert Top Contributor
    • Jul 2009
    • 1501

    #2
    I don't think C# has support for a 128-bit floating-point type. I did some googling and found this...

    I think if you want support for this, you'll either have to write a class in C# to support it, or create a wrapper class in C++ that you can use in C#.
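
    For the C++ wrapper route, here is one rough sketch (all names are hypothetical): keep every long double on the native side and exchange values with C# as decimal strings, so no .NET type ever has to hold the extended-precision number.

    Code:
    // Hypothetical wrapper export: C# passes the operands as strings and
    // receives the 19-digit result as a string.  This assumes the tool
    // chain's strtold/snprintf handle the 80-bit format; with the plain
    // Microsoft CRT they do not, so Intel's runtime or a custom converter
    // would be needed there.
    #include <cstdio>
    #include <cstdlib>

    extern "C" __declspec(dllexport)
    void AddLongDouble(const char* a, const char* b, char* out, int outSize)
    {
        long double x = strtold(a, nullptr);
        long double y = strtold(b, nullptr);
        snprintf(out, outSize, "%.19Lg", x + y);
    }

    // Hypothetical C# side:
    // [DllImport("LongDoubleWrapper.dll", CharSet = CharSet.Ansi)]
    // static extern void AddLongDouble(string a, string b,
    //                                  StringBuilder result, int size);
    //
    // var buf = new StringBuilder(64);
    // AddLongDouble("1", "1e-18", buf, buf.Capacity);  // "1.000000000000000001"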
