Best practices for moving large amounts of data using WCF ...

  • MobileMan

    Best practices for moving large amounts of data using WCF ...

    Hello everyone:

    I am looking for everyone's thoughts on moving large amounts of data
    (actually, not very large, but large enough that I'm throwing exceptions
    using the default configurations).

    We're doing a proof-of-concept on WCF whereby we have a Windows Forms
    client and a server. Our server is a middle tier that interfaces with our
    SQL Server 2005 database server.

    Using the "netTcpBinding" (with the default config ... no special
    adjustments to buffer size, buffer pool size, etc.) we are invoking a
    call to our server, which invokes a stored procedure and returns the query
    result. At this point we take the rows in the result set and "package" them
    into a hashtable, then return the hashtable to the calling client.

    Our original exception was a time-out exception, but with some
    experimentation we've learned that wasn't the problem .... although it is
    getting reported that way. Turns out it was the amount of data.

    The query should return ~11,000 records from the database. From our
    experimentation we've noticed we can only return 95 of the rows before
    we throw an "exceeded buffer size" exception. With the default values in
    our app.config file that size is 65,536.

    Not that moving 11,000 records is smart, but being limited to only 64 KB
    per communication seems overly restrictive. We can change the value from
    the default, but I wanted to ask first: what are others doing to work with
    larger amounts of data in WCF?

    Are you simply "turning up" the buffer size? Some kind of paging
    technique? Some other strategy? We're having a tough time finding answers
    on this.
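
    For reference, here is roughly what "turning up" the limits looks like if
    the binding is built in code rather than in app.config. This is only a
    sketch to show which knobs exist - IMiddleTier, the address, and the 16 MB
    figure are made-up placeholders, not recommendations:

    using System.Collections;
    using System.ServiceModel;

    // Hypothetical contract standing in for our real middle-tier service.
    [ServiceContract]
    public interface IMiddleTier
    {
        [OperationContract]
        Hashtable GetRecords();   // contents must still be serializable types
    }

    public static class ClientSetup
    {
        public static IMiddleTier CreateProxy()
        {
            NetTcpBinding binding = new NetTcpBinding();

            // Raise the 65,536-byte defaults; 16 MB is only an example value.
            binding.MaxReceivedMessageSize = 16 * 1024 * 1024;
            binding.MaxBufferSize = 16 * 1024 * 1024; // keep equal to MaxReceivedMessageSize for buffered transfers
            binding.ReaderQuotas.MaxArrayLength = 16 * 1024 * 1024;
            binding.ReaderQuotas.MaxStringContentLength = 16 * 1024 * 1024;

            ChannelFactory<IMiddleTier> factory = new ChannelFactory<IMiddleTier>(
                binding, new EndpointAddress("net.tcp://localhost:8000/MiddleTier"));
            return factory.CreateChannel();
        }
    }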

    Greatly appreciate any and all comments on this,

    Thanks
    --
    Stay Mobile
  • msgroup

    #2
    Re: Best practices for moving large amounts of data using WCF ...

    Hi, MobileMan:

    Are you having problems moving large amounts of data? Our SocketPro at
    www.udaparts.com solves this problem completely in a very simple and
    elegant way: non-blocking socket communication. SocketPro is a package of
    revolutionary software components built around batching, asynchrony and
    parallel computation, with many attractive and critical features to help
    you easily and quickly develop secure, internet-enabled distributed
    applications that run on all Windows platforms and smart devices with
    superior performance and scalability.

    See the attached tutorial three. Let me give you some code here.

    protected void GetManyItems()
    {
        int nRtn = 0;

        m_UQueue.SetSize(0);
        PushNullException();

        while (m_Stack.Count > 0)
        {
            // a client may either shut down the socket connection or call IUSocket::Cancel
            if (nRtn == SOCKET_NOT_FOUND || nRtn == REQUEST_CANCELED)
                break;

            CTestItem Item = (CTestItem)m_Stack.Pop();
            Item.SaveTo(m_UQueue);

            // Send at least 20 KB per batch, but don't make the batch too large either:
            // an oversized batch costs more memory and reduces concurrency if online
            // compression is enabled. For an optimal value, test it yourself.
            if (m_UQueue.GetSize() > 20480)
            {
                nRtn = SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);
                m_UQueue.SetSize(0);
                PushNullException();
            }
        }

        if (nRtn == SOCKET_NOT_FOUND || nRtn == REQUEST_CANCELED)
        {
            // the client is gone or the request was canceled; nothing more to send
        }
        else if (m_UQueue.GetSize() > sizeof(int))
        {
            // flush the last partial batch
            nRtn = SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);
        }
    }

    There are a lot of samples inside our SocketPro that demonstrate how to
    move a lot of database records, large files, and large collections of
    items across machines. See the site
    "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
    news:EAB34BEF-2C30-4AA6-A447-6019617373E7@mi crosoft.com...
    Hello everyone:
    >
    I am looking for everyone's thoughts on moving large amounts (actually,
    not
    very large, but large enough that I'm throwing exceptions using the
    default
    configurations) .
    >
    We're doing a proof-of-concept on WCF whereby we have a Windows form
    client
    and a Server. Our server is a middle-tier that interfaces with our SQL 05
    database server.
    >
    Using the "netTcpBindings " (using the default config ... no special
    adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
    a
    call to our server, which invokes a stored procedure and returns the query
    result. At this point we take the rows in the result set and "package"
    them
    into a hashtable, then return the hashtable to the calling client.
    >
    Our original exception was a time-out exception, but with some
    experimentation we've learned that wasn't the problem .... although it is
    getting reported that way. Turns out it was the amount of data.
    >
    The query should return ~11,000 records from the database. From our
    experimentation we've noticed we can only return 95 of the rows back
    before
    we throw a "exceded buffer size" exception. Using the default values in
    our
    app.config file that size is 65,536.
    >
    Not that moving 11,000 records is smart, but to be limited to only 64Kb in
    a
    communication seems overly restrictive. We can change the value from the
    default, but I wanted to ask what other's are doing to work with larger
    amounts of data with WCF first?
    >
    Are you simply "turning up" the size of the buffer size? Some kind of
    paging technique? Some other strategy?? Having a tough time finding
    answers
    on this.
    >
    Greatly appreciate any and all comments on this,
    >
    Thanks
    --
    Stay Mobile

    Comment

    • MobileMan

      #3
      Re: Best practices for moving large amounts of data using WCF ...

      Dear "msgroup":

      Thanks for the sales pitch .....

      Actually, we're interested in best practices as they pertain to WCF. We've
      been doing socket-level programming for far too long - that's the point of
      moving up to a higher-level abstraction, isn't it?

      Sounds like a nice product, though.
      --
      Stay Mobile


      "msgroup" wrote:
      Hi, MobileMan:
      >
      You get problems for moving large amounts of data? Our SocketPro at
      www.udaparts.com solves this problem completely with very simple and elegant
      way, non-blocking socket communication. SocketPro is a package of
      revolutionary software components written from batching, asynchrony and
      parallel computation with many attractive and critical features to help you
      easily and quickly develop secured internet-enabled distributed applications
      running on all of window platforms and smart devices with super performance
      and scalability.
      >
      See the attached tutorial three. Let me give you some code here.
      >
      protected void GetManyItems()
      >
      {
      >
      int nRtn = 0;
      >
      m_UQueue.SetSiz e(0);
      >
      PushNullExcepti on();
      >
      while (m_Stack.Count 0)
      >
      {
      >
      //a client may either shut down the socket
      connection or call IUSocket::Cance l
      >
      if (nRtn == SOCKET_NOT_FOUN D || nRtn ==
      REQUEST_CANCELE D)
      >
      break;
      >
      CTestItem Item = (CTestItem)m_St ack.Pop();
      >
      Item.SaveTo(m_U Queue);
      >
      //20 kbytes per batch at least
      >
      //also shouldn't be too large.
      >
      //If the size is too large, it will cost
      more memory resource and reduce conccurency if online compressing is
      enabled.
      >
      //for an opimal value, you'd better test it
      by yourself
      >
      if (m_UQueue.GetSi ze() 20480)
      >
      {
      >
      nRtn =
      SendReturnData( TThreeConst.idG etBatchItemsCTT hree, m_UQueue);
      >
      m_UQueue.SetSiz e(0);
      >
      PushNullExcepti on();
      >
      }
      >
      }
      >
      if (nRtn == SOCKET_NOT_FOUN D || nRtn == REQUEST_CANCELE D)
      >
      {
      >
      }
      >
      else if (m_UQueue.GetSi ze() sizeof(int))
      >
      {
      >
      nRtn =
      SendReturnData( TThreeConst.idG etBatchItemsCTT hree, m_UQueue);
      >
      }
      >
      }
      >
      There are a lot of samples inside our SocketPro to demonstrate how to
      move a lot of database records, large files, a large collection of items
      across machines. See the site

      >
      >
      >
      >
      >
      "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
      news:EAB34BEF-2C30-4AA6-A447-6019617373E7@mi crosoft.com...
      Hello everyone:

      I am looking for everyone's thoughts on moving large amounts (actually,
      not
      very large, but large enough that I'm throwing exceptions using the
      default
      configurations) .

      We're doing a proof-of-concept on WCF whereby we have a Windows form
      client
      and a Server. Our server is a middle-tier that interfaces with our SQL 05
      database server.

      Using the "netTcpBindings " (using the default config ... no special
      adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
      a
      call to our server, which invokes a stored procedure and returns the query
      result. At this point we take the rows in the result set and "package"
      them
      into a hashtable, then return the hashtable to the calling client.

      Our original exception was a time-out exception, but with some
      experimentation we've learned that wasn't the problem .... although it is
      getting reported that way. Turns out it was the amount of data.

      The query should return ~11,000 records from the database. From our
      experimentation we've noticed we can only return 95 of the rows back
      before
      we throw a "exceded buffer size" exception. Using the default values in
      our
      app.config file that size is 65,536.

      Not that moving 11,000 records is smart, but to be limited to only 64Kb in
      a
      communication seems overly restrictive. We can change the value from the
      default, but I wanted to ask what other's are doing to work with larger
      amounts of data with WCF first?

      Are you simply "turning up" the size of the buffer size? Some kind of
      paging technique? Some other strategy?? Having a tough time finding
      answers
      on this.

      Greatly appreciate any and all comments on this,

      Thanks
      --
      Stay Mobile
      >
      >
      >

      Comment

      • RobinS

        #4
        Re: Best practices for moving large amounts of data using WCF ...

        Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
        WCF. You could always check out his web site and see if there's any contact
        info. I saw him talk about WCF yesterday at the Vista Launch in SF. Pretty
        cool stuff. http://www.idesign.net

        Good luck.

        Robin S.

        • MobileMan

          #5
          Re: Best practices for moving large amounts of data using WCF ...

          Yeah, we've seen their site ... some really good stuff. We're pretty new to
          all this, but I haven't seen anybody else who seems to be as "fluent" as they
          are. They've obviously put in some serious time on the subject to come up
          with all that.

          Wish I was there to see him speak too.

          From what we've gathered so far (which admittedly isn't much ... not a lot
          of people doing this) the way to handle this is to use stream-based
          connections instead of buffered - the default. We could go through and
          change some of the settings in the config file, but the issue would be
          whether you made the buffer / max message size "big enough" to handle all
          possibilities.
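
          Just to make that concrete, the streamed approach seems to require
          shaping the operation around a Stream. Something like the sketch below
          (IRecordStreamer and GetRecordStream are invented names, and we haven't
          actually run this yet - it's just our reading of the documentation):

          using System.IO;
          using System.ServiceModel;

          // Hypothetical streamed contract: the operation exposes the data as a
          // Stream instead of one big buffered message.
          [ServiceContract]
          public interface IRecordStreamer
          {
              [OperationContract]
              Stream GetRecordStream();
          }

          public static class StreamedClient
          {
              public static Stream Open()
              {
                  NetTcpBinding binding = new NetTcpBinding();
                  binding.TransferMode = TransferMode.Streamed;   // stream rather than buffer whole messages
                  binding.MaxReceivedMessageSize = long.MaxValue; // still limits total stream size, so raise it

                  ChannelFactory<IRecordStreamer> factory = new ChannelFactory<IRecordStreamer>(
                      binding, new EndpointAddress("net.tcp://localhost:8000/Records"));
                  return factory.CreateChannel().GetRecordStream();
              }
          }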

          I'm not sure just how much of an issue it would be, but conceptually the
          idea of taking a setting that defaults to 64 KB and changing it to something
          like 75 MB, 150 MB, or even larger ... just to handle those
          few-and-far-between situations that only come up once in a blue moon ...
          seems wrong somehow. Don't get me wrong, though, if that is really the best
          way to handle this then we'll be changing the settings! We'd love to hear
          from someone who's really using WCF - moving large amounts of data - and
          the strategy they've employed.
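
          The paging idea from my original post would look roughly like this in
          contract terms. Again, purely a hypothetical sketch (PageOfRecords,
          IPagedQuery, and GetPage are all invented names), just to show the shape
          of the idea:

          using System.Collections.Generic;
          using System.Runtime.Serialization;
          using System.ServiceModel;

          // Hypothetical paged contract: the client pulls the result set back a
          // page at a time, so each message stays well under the size limits.
          [DataContract]
          public class PageOfRecords
          {
              [DataMember] public List<string[]> Rows;
              [DataMember] public bool HasMore;
          }

          [ServiceContract]
          public interface IPagedQuery
          {
              [OperationContract]
              PageOfRecords GetPage(int pageNumber, int pageSize);
          }

          public static class PagedClient
          {
              // Keep requesting pages until the service reports there are no more.
              public static IEnumerable<string[]> GetAllRows(IPagedQuery proxy, int pageSize)
              {
                  int page = 0;
                  PageOfRecords result;
                  do
                  {
                      result = proxy.GetPage(page++, pageSize);
                      foreach (string[] row in result.Rows)
                          yield return row;
                  }
                  while (result.HasMore);
              }
          }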

          I'll drop a note to Juval and maybe get lucky. No matter what, I'll
          post back and let you know what we went with and how the "real world"
          results work out. WCF seems to hold A LOT of promise .... we all just
          need more communication about it.

          Thanks Robin.

          --
          Stay Mobile


          "RobinS" wrote:
          Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
          WCF. You could always check out his web site and see if there's any contact
          info. I saw him talk about WCF yesterday at the Vista Launch in SF. Pretty
          cool stuff. http://www.idesign.net
          >
          Good luck.
          >
          Robin S.
          -------------------------------------------
          "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
          news:EAB34BEF-2C30-4AA6-A447-6019617373E7@mi crosoft.com...
          Hello everyone:

          I am looking for everyone's thoughts on moving large amounts (actually,
          not
          very large, but large enough that I'm throwing exceptions using the
          default
          configurations) .

          We're doing a proof-of-concept on WCF whereby we have a Windows form
          client
          and a Server. Our server is a middle-tier that interfaces with our SQL
          05
          database server.

          Using the "netTcpBindings " (using the default config ... no special
          adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
          a
          call to our server, which invokes a stored procedure and returns the
          query
          result. At this point we take the rows in the result set and "package"
          them
          into a hashtable, then return the hashtable to the calling client.

          Our original exception was a time-out exception, but with some
          experimentation we've learned that wasn't the problem .... although it is
          getting reported that way. Turns out it was the amount of data.

          The query should return ~11,000 records from the database. From our
          experimentation we've noticed we can only return 95 of the rows back
          before
          we throw a "exceded buffer size" exception. Using the default values in
          our
          app.config file that size is 65,536.

          Not that moving 11,000 records is smart, but to be limited to only 64Kb
          in a
          communication seems overly restrictive. We can change the value from the
          default, but I wanted to ask what other's are doing to work with larger
          amounts of data with WCF first?

          Are you simply "turning up" the size of the buffer size? Some kind of
          paging technique? Some other strategy?? Having a tough time finding
          answers
          on this.

          Greatly appreciate any and all comments on this,

          Thanks
          --
          Stay Mobile
          >
          >
          >

          Comment

          • RobinS

            #6
            Re: Best practices for moving large amounts of data using WCF ...

            They're not just "fluent". Like I said before, Juval actually helped MS
            design the WCF stuff. That kind of takes fluent to a whole new level. ;-)
            I haven't used it, so unfortunately, I can't help you specifically.

            However, here's something that should help. There is a newsgroup
            specifically for WCF. WCF used to be called Indigo before the Marketing
            people got their claws into it. So I recommend that you post your query to
            this newsgroup:

            microsoft.public.windows.developer.winfx.indigo

            Someone there can probably be very helpful.

            Good luck.
            Robin S.

            • MobileMan

              #7
              Re: Best practices for moving large amounts of data using WCF ...

              Bravo!

              --
              Stay Mobile


              "RobinS" wrote:
              They're not just "fluent". Like I said before, Juval actually helped MS
              design the WCF stuff. That kind of takes fluent to a whole new level. ;-)
              I haven't used it, so unfortunately, I can't help you specifically.
              >
              However, here's something that should help. There is a newsgroup
              specifically for WCF. WCF used to be called Indigo before the Marketing
              people got their claws into it. So I recommend that you post your query to
              this newsgroup:
              >
              microsoft.publi c.windows.devel oper.winfx.indi go
              >
              Someone there can probably be very helpful.
              >
              Good luck.
              Robin S.
              -------------------------------------------------
              "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
              news:BFF0C2D6-9BDF-4982-BE74-10C9FFC3FBAA@mi crosoft.com...
              Yea, we've seen their site ... some really good stuff. We're pretty new
              to
              all this, but I haven't seen anybody else who seems to be as "fluent" as
              they
              are. They've obviously put in some serious time on the subject to come
              up
              with all that.

              Wish I was there to see him speak too.

              From what we've gathered so far (which admitedly isn't much ... not a lot
              of
              people doing this) the way to handle this is using stream-based
              connections
              instead of using buffered - the default. We could go through and change
              some
              of the settings in the config file, but the issue would be did you make
              the
              buffer / max message size "big enough" to handle all possibilities?

              I'm not sure just how much of an issue it would be, but conceptually the
              idea of taking a setting that defaults at 64Kb and changing it to
              something
              like 75MB, 150MB, or even larger ... just to handle those
              few-and-far-between
              situations that only come up once in a blue moon ... seems wrong somehow.
              Dont' get me wrong, though, if that is really the best way to handle this
              then we'll be changing the settings! We'd love to hear from someone
              who's
              really using WCF - using large amount of data - and the strategy they've
              employed.

              I'll drop a note to Juval and maybe get lucky. No matter what, I'll
              post-back and let you know what we went with and what the "real world"
              results work out like. WCF seems to hold A LOT of promise .... we all
              just
              need more communication about it.

              Thanks Robin.

              --
              Stay Mobile


              "RobinS" wrote:
              Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
              WCF. You could always check out his web site and see if there's any
              contact
              info. I saw him talk about WCF yesterday at the Vista Launch in SF.
              Pretty
              cool stuff. http://www.idesign.net
              >
              Good luck.
              >
              Robin S.
              -------------------------------------------
              "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
              news:EAB34BEF-2C30-4AA6-A447-6019617373E7@mi crosoft.com...
              Hello everyone:

              I am looking for everyone's thoughts on moving large amounts
              (actually,
              not
              very large, but large enough that I'm throwing exceptions using the
              default
              configurations) .

              We're doing a proof-of-concept on WCF whereby we have a Windows form
              client
              and a Server. Our server is a middle-tier that interfaces with our
              SQL
              05
              database server.

              Using the "netTcpBindings " (using the default config ... no special
              adjustments to buffer size, buffer pool size, etc., etc.) we are
              invoking
              a
              call to our server, which invokes a stored procedure and returns the
              query
              result. At this point we take the rows in the result set and
              "package"
              them
              into a hashtable, then return the hashtable to the calling client.

              Our original exception was a time-out exception, but with some
              experimentation we've learned that wasn't the problem .... although it
              is
              getting reported that way. Turns out it was the amount of data.

              The query should return ~11,000 records from the database. From our
              experimentation we've noticed we can only return 95 of the rows back
              before
              we throw a "exceded buffer size" exception. Using the default values
              in
              our
              app.config file that size is 65,536.

              Not that moving 11,000 records is smart, but to be limited to only
              64Kb
              in a
              communication seems overly restrictive. We can change the value from
              the
              default, but I wanted to ask what other's are doing to work with
              larger
              amounts of data with WCF first?

              Are you simply "turning up" the size of the buffer size? Some kind of
              paging technique? Some other strategy?? Having a tough time finding
              answers
              on this.

              Greatly appreciate any and all comments on this,

              Thanks
              --
              Stay Mobile
              >
              >
              >
              >
              >
              >

              Comment

              • msgroup

                #8
                Re: Best practices for moving large amounts of data using WCF ...

                Hi, All:

                See the site at
                http://www.udaparts.com/document/Tut...orialThree.htm for how to move
                large files, large record sets, large collections of items, and large
                whatever else with our SocketPro at www.udaparts.com.

                We see a lot of similar problems posted on various discussion groups and
                web sites. Let me tell you, our SocketPro is able to solve this type of
                challenging problem with much more elegant and simpler code. This tutorial
                sample is a good testimony to the quality of our SocketPro. You can also
                see our source code for our remote Windows file and database services.

                We publish this message to help you and also as advertisement on the
                internet. Our SocketPro is able to solve many, many challenging problems
                in our daily programming in a unique way: batching, asynchrony and
                parallel computation.

                Regards,

                "MobileMan" <MobileMan@disc ussions.microso ft.comwrote in message
                news:EAB34BEF-2C30-4AA6-A447-6019617373E7@mi crosoft.com...
                Hello everyone:
                >
                I am looking for everyone's thoughts on moving large amounts (actually,
                not
                very large, but large enough that I'm throwing exceptions using the
                default
                configurations) .
                >
                We're doing a proof-of-concept on WCF whereby we have a Windows form
                client
                and a Server. Our server is a middle-tier that interfaces with our SQL 05
                database server.
                >
                Using the "netTcpBindings " (using the default config ... no special
                adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
                a
                call to our server, which invokes a stored procedure and returns the query
                result. At this point we take the rows in the result set and "package"
                them
                into a hashtable, then return the hashtable to the calling client.
                >
                Our original exception was a time-out exception, but with some
                experimentation we've learned that wasn't the problem .... although it is
                getting reported that way. Turns out it was the amount of data.
                >
                The query should return ~11,000 records from the database. From our
                experimentation we've noticed we can only return 95 of the rows back
                before
                we throw a "exceded buffer size" exception. Using the default values in
                our
                app.config file that size is 65,536.
                >
                Not that moving 11,000 records is smart, but to be limited to only 64Kb in
                a
                communication seems overly restrictive. We can change the value from the
                default, but I wanted to ask what other's are doing to work with larger
                amounts of data with WCF first?
                >
                Are you simply "turning up" the size of the buffer size? Some kind of
                paging technique? Some other strategy?? Having a tough time finding
                answers
                on this.
                >
                Greatly appreciate any and all comments on this,
                >
                Thanks
                --
                Stay Mobile

                Comment

                Working...