yipeee!

  • Joe Weinstein

    #46
    Re: yipeee!

    Daniel Morgan wrote:
    > Niall Litchfield wrote:
    >
    >> "Rob Cowell" <rjc4687@hotmail.com> wrote in message
    >> news:40211355.94A43A97@hotmail.com...
    >>
    >>>> And that I thought Oracle had some facility
    >>>> such that we could cluster and load-balance two+ nodes running
    >>>> something like Parallel Oracle so that should a node need booting
    >>>> or modifying off-line the application is still running (at
    >>>> reduced capacity) for the users.
    >>>
    >>> Oracle Real Application Clusters
    >>
    >> http://tahiti.oracle.com/pls/db92/db...ation+clusters
    >>
    >> Just to add to that, if you are writing the app from scratch RAC also
    >> supports a technology (transparent application failover) where users
    >> connected to a node that fails will failover to a running node without
    >> dataloss and without being disconnected from the application. I am
    >> unsure if
    >> IBM have a similar technology
    >
    > Only on mainframes. With shared nothing if you lose a node ... the
    > storage associated with that node is lost too.

    Yep. Also, I really don't want to sound like I'm picking only on Oracle,
    because I complain about other DBMSes too. Oracle's TAF fooled a number
    of customers into believing it really was Transparent Application Failover,
    but it seems to be so only for certain mostly-idle clients. The reason I
    say this is because while there is no data loss during a failover, nor
    even any transactional context (locks), what is lost is any *computational*
    context that the client may be relying on if it was actually doing
    something when the failover occurred. For instance, most cursor context
    is lost. Java clients that may have created and are re-using Prepared
    Statements will find that all those prepared statements are now defunct,
    and must be recreated before the client can even retry what they were doing.
    This generally means returning to the line of code right after obtaining the
    original connection. Having the connection automatically failover to an
    appropriate backup DBMS is certainly valuable, but calling it "TAF" was
    'aiming high' in the marketing department, IMHO.
    Joe
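
    To make the point above concrete, here is a minimal JDBC sketch of the pattern
    Joe describes: on a failure, go back to the point right after obtaining the
    connection and re-prepare everything before retrying. It is a hedged
    illustration, not TAF- or vendor-specific; the DataSource, the customers
    table, and the single-retry policy are all illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class FailoverRetryExample {

        private final DataSource dataSource; // assumed to point at the clustered database

        public FailoverRetryExample(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Looks up a customer name, retrying once from scratch if a failover
        // (or any other connection failure) kills the statement mid-flight.
        public String findCustomerName(int customerId) throws SQLException {
            SQLException lastFailure = null;
            for (int attempt = 0; attempt < 2; attempt++) {
                // "Return to the line of code right after obtaining the original
                // connection": the connection and the prepared statement are both
                // rebuilt on every attempt, since the old statement is defunct.
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT name FROM customers WHERE id = ?")) {
                    ps.setInt(1, customerId);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString(1) : null;
                    }
                } catch (SQLException e) {
                    lastFailure = e;
                }
            }
            throw lastFailure;
        }
    }

    In production code you would normally inspect vendor-specific error codes and
    retry only on disconnect/failover errors rather than on every SQLException,
    but the shape of the loop is the point here.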


    • Daniel Morgan

      #47
      Re: yipeee!

      Joe Weinstein wrote:
      > Yep. Also, I really don't want to sound like I'm picking only on Oracle,
      > because I complain about other DBMSes too. Oracle's TAF fooled a number
      > of customers into believing it really was Transparent Application Failover,
      > but it seems to be so only for certain mostly-idle clients. The reason I
      > say this is because while there is no data loss during a failover, nor
      > even any transactional context (locks), what is lost is any *computational*
      > context that the client may be relying on if it was actually doing
      > something when the failover occurred. For instance, most cursor context
      > is lost. Java clients that may have created and are re-using Prepared
      > Statements will find that all those prepared statements are now defunct,
      > and must be recreated before the client can even retry what they were
      > doing.
      > This generally means returning to the line of code right after obtaining
      > the
      > original connection. Having the connection automatically failover to an
      > appropriate backup DBMS is certainly valuable, but calling it "TAF" was
      > 'aiming high' in the marketing department, IMHO.
      > Joe

      From the client's standpoint it is completely transparent, which is the
      origin of the name.

      Perhaps you need to come take the class I teach on RAC.

      --
      Daniel Morgan
      We make it possible for you to keep learning at the University of Washington, even if you work full time or live outside of the Seattle area.


      damorgan@x.washington.edu
      (replace 'x' with a 'u' to reply)


      • Holger Baer

        #48
        Re: yipeee!

        Daniel Morgan wrote:
        > Joe Weinstein wrote:
        >
        >> Yep. Also, I really don't want to sound like I'm picking only on Oracle,
        >> because I complain about other DBMSes too. Oracle's TAF fooled a number
        >> of customers into believing it really was Transparent Application
        >> Failover,
        >> but it seems to be so only for certain mostly-idle clients. The reason I
        >> say this is because while there is no data loss during a failover, nor
        >> even any transactional context (locks), what is lost is any
        >> *computational*
        >> context that the client may be relying on if it was actually doing
        >> something when the failover occurred. For instance, most cursor context
        >> is lost. Java clients that may have created and are re-using Prepared
        >> Statements will find that all those prepared statements are now defunct,
        >> and must be recreated before the client can even retry what they were
        >> doing.
        >> This generally means returning to the line of code right after
        >> obtaining the
        >> original connection. Having the connection automatically failover to an
        >> appropriate backup DBMS is certainly valuable, but calling it "TAF" was
        >> 'aiming high' in the marketing department, IMHO.
        >> Joe
        >
        >
        > From the client's standpoint it is completely transparent which is the
        > origin of the name.
        >
        > Perhaps you need to come take the class I teach on RAC.
        >

        I guess it's a question of how you define 'completely transparent'. Of course,
        the client has to do nothing but sit there and wait until the connection
        is failed over. However, one might argue that having lost my running
        transaction (insert, update, delete) and all my package states isn't all that
        transparent.
        But I'm still impressed by how the new node is able to pick up a running query
        and continue at the same point where the failover occurred - since a result set
        by definition has no order (except when specified), I'm still wondering how
        the node I failed over to is able to bypass that restriction. It was a question
        the teacher of my RAC course couldn't quite answer, but then maybe he just
        knew so much more of the internal mechanics that he didn't need to wonder ;-)

        Cheer up, that cast on your leg won't last forever ;-)

        Holger


        • Daniel Morgan

          #49
          Re: yipeee!

          Holger Baer wrote:

          > I'm still wondering how
          > the node I failed over to is able to bypass that restriction. It was a
          > question
          > the teacher of my RAC course couldn't quite answer, but then maybe he just
          > knew so much more of the internal mechanics that he needn't to wonder ;-)
          >
          > Cheer up, that cast on your leg won't last forever ;-)
          >
          > Holger

          Thanks ... just 11 more days.

          The way it is done is through a combination of shared everything ...
          meaning that each node sees the same database files, the same data, and
          can also see the same redo logs and rollback segments. Thus if one node
          fails ... any other node can see everything on disk that was in use by
          any other node. Add to that cache fusion ... the ability to pass memory
          caches between nodes ... and the only thing one node needs to know about
          another node to fail over is what failed and where. We routinely see
          restarts in the <60 second range.

          --
          Daniel Morgan
          We make it possible for you to keep learning at the University of Washington, even if you work full time or live outside of the Seattle area.


          damorgan@x.washington.edu
          (replace 'x' with a 'u' to reply)


          • Database Guy

            #50
            Re: yipeee!

            bucknuggets@yahoo.com (Buck Nuggets) wrote in message news:<66a61715.0402050632.542bd984@posting.google.com>...
            > Daniel Morgan <damorgan@x.washington.edu> wrote:
            > > My point being that if I made one change to the benchmark ... say added
            > > the following ... "one hour into the test pull the plug on one node and
            > > complete the job" ... shared nothing and federated databases wouldn't
            > > even be able to compete.
            >
            > Kind of neat how as soon as product a has a new feature its apologists
            > declare it the essential and distinguishing feature in the
            > marketplace..
            >
            > Now, if it works as smoothly as you describe - that's great, and I'll
            > look forward to using it. On the other hand, in my experience with
            > MPP databases (primarily Informix on AIX using SP2), I could go a year
            > without having to reboot any nodes.

            Daniel seems to think otherwise. He clearly feels that Oracle's
            allegedly faster recovery from node failures outweighs its
            less-than-half-speed RAC performance under BAU circumstances (i.e.
            nodes working). I can't understand why Oracle nodes crash so much, but
            it's not a product I know well. Hopefully someone else will explain.


            DG


            • Mike

              #51
              Re: yipeee!

              In article <bvtfmn$a5e$1@hanover.torolab.ibm.com>, Serge Rielau wrote:
              > OK, so you need HA. What else?
              > Any specific SQL Features?
              > What's your company's strategy for App development? Java, .Net, both?
              > Any OS/Hardware preference?
              > What's your company's DB skillset in house?
              >
              > Cheers
              > Serge

              Since the current system is in VSAM, no specific SQL features are required.
              The app development is now performed on this system in COBOL, though over
              time I expect it to be C. The unix farm is all AIX. We have a DBA and
              several other people with RDBMS experience.

              Mike


              • Mike

                #52
                Re: yipeee!

                In article <bvtjk7$bvp$1@hanover.torolab.ibm.com>, Serge Rielau wrote:
                > My DB2 "offer" would be DB2 without DPF on two AIX boxes (OP wants AIX
                > it seems). The second box licensed as idle standby only (1 CPU).
                > With clusterware to handle the failover.
                > This is under the assumption that a rewrite of the app to a relational
                > DBMS is intended.
                > There is not enough information to home in on which edition or box-size
                > to choose.
                >
                > It seems Mark A. believes CICS would be less invasive. I'm not familiar
                > with either VSE or CICS so I keep my mouth shut.
                >
                > Let's presume 100% scalability for RAC (if you want to use it) for the
                > sake of math (and to not start another flame war) and similar resource
                > requirements (AIX, RAM/box, comparable disk overall).
                >
                > Your turn
                > Serge

                Yes, AIX, and I'm thinking of 2-4 p650s with 8 CPUs and 8 GB, running disk
                off an ESS (Shark). Instead of HACMP I'm looking for something where I
                can take a node out of the 'cluster' for maintenance, etc., then
                re-insert it back into the cluster and have it 'catch up' on the changes
                it missed.

                For a phased approach I'm thinking:
                - move entirely to CICS/AIX
                - move the VSAM to DB2 on the AIX box
                - use the CICS interfaces for the existing programs to reach DB2
                outside of CICS
                - over time, rewrite the CICS applications to run natively on AIX


                • Daniel Morgan

                  #53
                  Re: yipeee!

                  Database Guy wrote:
                  > Daniel seems to think otherwise. He clearly feels that Oracle's
                  > allegedly faster recover from node failures outweighs its
                  > less-than-half-speed RAC performance under BAU circumstances (i.e.
                  > nodes working). I can't understand why Oracle nodes crash so much, but
                  > it's not a product I know well. Hopefully someone else will explain.
                  >
                  >
                  > DG

                  I don't think nodes crash often. But all hardware, all operating
                  systems, all platforms, and all software have problems from
                  time to time. Reading your post, someone with little or no experience
                  might be tempted to believe that somehow one vendor's RDBMS is more
                  likely to cause a CPU to die than another's: pure nonsense. All machines
                  lose CPUs. All machines lose RAM. Computers are not perpetual motion
                  machines. And downtime has a real cost in $.

                  If you truly believe that hardware never crashes, I presume you also
                  don't do nightly backups. In other words ... thanks for the hyperbole.

                  And if you truly believe that in the real world RAC scaling at 128 nodes
                  gives less than 50% of the performance of shared nothing scaling at 128
                  nodes, I have some stocks and bonds I'd like to sell you.

                  --
                  Daniel Morgan
                  We make it possible for you to keep learning at the University of Washington, even if you work full time or live outside of the Seattle area.


                  damorgan@x.washington.edu
                  (replace 'x' with a 'u' to reply)


                  • dba

                    #54
                    Re: yipeee!

                    Then where are the benchmarks and real customer references to prove it?
                    Where are the examples of this wonderful technology displacing NCR and
                    IBM shared nothing implementations because it scales better?

                    I'll take the stocks and bonds.

                    DBA

                    Daniel Morgan wrote:
                    > Database Guy wrote:
                    >
                    >> Daniel seems to think otherwise. He clearly feels that Oracle's
                    >> allegedly faster recover from node failures outweighs its
                    >> less-than-half-speed RAC performance under BAU circumstances (i.e.
                    >> nodes working). I can't understand why Oracle nodes crash so much, but
                    >> it's not a product I know well. Hopefully someone else will explain.
                    >>
                    >>
                    >> DG
                    >
                    >
                    > I don't think nodes crash often: But all hardware, all operating
                    > systems, all platforms, and all software does have problems from
                    > time-to-time. Reading your post someone with little or no experience
                    > might be tempted to believe that somehow one vendor's RDBMS is more
                    > likely to cause a CPU to die than another's: Pure nonsense. All machines
                    > lose CPUs. All machines lose RAM. Computers are not perpetual motion
                    > machines. And downtime has a real cost in $.
                    >
                    > If you truly believe that hardware never crashes I presume you also
                    > don't do nightly backups. In other words ... thanks for the hyperbole.
                    >
                    > And if you truly believe that in the real-world RAC scaling at 128 nodes
                    > gives less than 50% of the performance of shared nothing scaling at 128
                    > nodes I have some stocks and bonds I'd like to sell you.
                    >


                    • Mark Townsend

                      #55
                      Re: yipeee!

                      Sigh.

                      Check out the #1 result at
                      http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
                      For RAC references go to
                      http://www.oracle.com/ultrasearch/ww...=7&p_Query=RAC

                      Did you actually even attempt to look for yourself ?

                      dba wrote:
                      > Then where are the benchmarks and real customer references to prove
                      > it? Where are the examples of this wonderful technology displacing NCR
                      > and IBM shared nothing implementations because it scales better?
                      >
                      > I'll take the stocks and bonds.
                      >
                      > DBA
                      >
                      > Daniel Morgan wrote:
                      >
                      >> Database Guy wrote:
                      >>
                      >>> Daniel seems to think otherwise. He clearly feels that Oracle's
                      >>> allegedly faster recover from node failures outweighs its
                      >>> less-than-half-speed RAC performance under BAU circumstances (i.e.
                      >>> nodes working). I can't understand why Oracle nodes crash so much, but
                      >>> it's not a product I know well. Hopefully someone else will explain.
                      >>>
                      >>>
                      >>> DG
                      >>
                      >>
                      >>
                      >> I don't think nodes crash often: But all hardware, all operating
                      >> systems, all platforms, and all software does have problems from
                      >> time-to-time. Reading your post someone with little or no experience
                      >> might be tempted to believe that somehow one vendor's RDBMS is more
                      >> likely to cause a CPU to die than another's: Pure nonsense. All
                      >> machines lose CPUs. All machines lose RAM. Computers are not
                      >> perpetual motion
                      >> machines. And downtime has a real cost in $.
                      >>
                      >> If you truly believe that hardware never crashes I presume you also
                      >> don't do nightly backups. In other words ... thanks for the hyperbole.
                      >>
                      >> And if you truly believe that in the real-world RAC scaling at 128 nodes
                      >> gives less than 50% of the performance of shared nothing scaling at 128
                      >> nodes I have some stocks and bonds I'd like to sell you.
                      >>
                      >


                      • Mark A

                        #56
                        Re: yipeee!

                        "Mark Townsend" <mark.townsend@ oracle.com> wrote in message
                        news:4022F0B2.6 080206@oracle.c om...[color=blue]
                        > Sigh.
                        >
                        > Check out the #1 result at
                        > http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
                        > For RAC references go to
                        >
                        > http://www.oracle.com/ultrasearch/ww...=7&p_Query=RAC
                        >
                        > Did you actually even attempt to look for yourself ?
                        >
                        Sigh. Again, just like with Daniel, we have a failure here to understand the
                        difference between a data warehouse application and an OLTP application.

                        The TPC-C benchmarks referred to above are for OLTP, which does not benefit
                        from parallel database access to support complex decision support queries.
                        What the TPC-C benchmark measures is the ability to process multiple
                        transactions at one time, but each transaction is of an OLTP nature,
                        efficiently accessing small portions of each table.

                        That is completely different from the TPC-H benchmark, which measures the
                        ability to process a small number of queries that typically access data from
                        an entire table (or tables), and benefit by accessing that data in parallel.
                        TPC-H (formerly TPC-D) is why parallel databases (and specifically shared
                        nothing parallel databases) were invented.

                        The weakness and lack of LINEAR scalability of the Oracle implementation
                        does not show in an OLTP environment such as TPC-C, regardless of the number
                        of simultaneous users that are accessing the data at one time. Of course, for
                        the person that started this thread, their primary interest (and maybe only
                        interest) is in OLTP processing with failover capabilities.

                        But there are much bigger fish to fry regarding the existing COBOL, CICS,
                        VSAM, etc. application before one worries about the technical merits of
                        Oracle vs. DB2.



                        • Serge Rielau

                          #57
                          Re: yipeee!

                          Ask your favorite IBM Rep about "HADR".

                          Cheers
                          serge
                          --
                          Serge Rielau
                          DB2 SQL Compiler Development
                          IBM Toronto Lab


                          • Mark Townsend

                            #58
                            Re: yipeee!

                            Mark A wrote:
                            > "Mark Townsend" <mark.townsend@oracle.com> wrote in message
                            > news:4022F0B2.6080206@oracle.com...
                            >
                            >> Sigh.
                            >>
                            >> Check out the #1 result at
                            >> http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
                            >> For RAC references go to
                            >>
                            >
                            > http://www.oracle.com/ultrasearch/ww...=7&p_Query=RAC
                            >
                            >> Did you actually even attempt to look for yourself ?
                            >>
                            >
                            > Sigh. Again, just like with Daniel, we have failure here to understand the
                            > difference between a data warehouse application and an OLTP application.

                            I understand perfectly. I am not confused. I know exactly what each
                            product is capable of, down to the nth degree. It's my job to know, and
                            I'm very, very good at my job.
                            >
                            > The TPC-C benchmarks referred to above are for OLTP, which does not benefit
                            > from parallel database access to support complex decision support queries.

                            Exactly right. Still no confusion here
                            > What the TPC-C benchmark measures is the ability to process multiple
                            > transactions at one time, but each transaction is of an OLTP nature
                            > efficiently access small portions of each table.

                            Exactly right. Still no confusion here
                            > That is completely different from the TPC-H Benchmark that measures the
                            > ability to process a small number of queries that typically access data from
                            > an entire table (or tables), and benefit by accessing that data in parallel.

                            Right again. Still no confusion here
                            > TPC-H (formally TPC-D) is why parallel databases (and specifically share
                            > nothing parallel databases) were invented.

                            Ok - here is where it breaks down. Shared nothing parallelism à la UDB
                            and Teradata is just one way to do TPC-H. I believe that you yourself
                            identified that big SMP is also now a completely viable approach to
                            TPC-H (and the TPC-H benchmarks and Richard Winter VLDB survey also tend
                            to show this). Anyhow, this is taking us even further off-topic, so let's
                            drop it.
                            >
                            > The weakness and lack of LINEAR scalability of the Oracle implementation
                            > does not show in an OLTP environment such as TPC-C, regardless the number of
                            > simultaneous users that are accessing the data a one time. Of course for the
                            > person that started this thread, their primary interest (and maybe only
                            > interest) is in OLTP processing with failover capabilities.

                            Thank You. My point entirely. We ARE talking OLTP and HA. RAC is
                            completely in line with the original OP's requirements. Shared nothing,
                            NCR and TPC-H have absolutely nothing to do with what is being
                            discussed. So why do the IBM proponents keep bringing it up ad nauseam?
                            Here's how the dialogue went.

                            OP: I want a low cost multi-node HA environment for OLTP. Like what
                            Oracle does

                            IBM: IBM has a better parallel shared nothing architecture (and will
                            be easy to migrate to).

                            Daniel: Of course it does, but it's not relevant for the requirements.
                            RAC is relevant. TAF may be relevant too (Note that this is Daniel's
                            opinion about shared nothing, btw)

                            IBM: Parallel DB2 has had this function for years

                            Mark A: Daniel, You are confused, RAC and Parallel DB2 are two different
                            things. RAC does failover. Accusations of propaganda fly. (note - Mark A
                            does not point out the confusion in IBM, it's Daniel that's confused)

                            Daniel: I'm not confused. RAC is what the OP does. And RAC does two
                            things - failover and scale out

                            Database Guy: Daniel, You are confused. Where are the RAC scale out
                            numbers/references that compare to IBM's and Teradata's shared nothing
                            implementation?

                            Me: What has this got to do with the topic? These are the wrong things
                            to be asking. Instead, what you should have asked was where are the
                            scale out and HA numbers and references that are relevant to the OP's
                            planned usage of RAC. This is what I provided.

                            Mark A: You are confused....


                            I know Daniel rattles your cage at times, but I do not believe he was
                            ever confused about what he was saying or proposing. It seemed people
                            just wanted to take him out of context and pick a fight.
                            >
                            > But there are much bigger fish to fry regarding the existing COBOL, CICS,
                            > VSAM, etc.application before one worries about the technical merits of
                            > Oracle vs. DB2.
                            >

                            Exactly. Note however that Oracle does support COBOL ESQL, CIC's and
                            DRDA access.


                            • Mark Townsend

                              #59
                              Re: yipeee!

                              Mark A wrote:
                              > "Mark Townsend" <mark.townsend@oracle.com> wrote in message
                              > news:4022F0B2.6080206@oracle.com...
                              >
                              >> Sigh.
                              >>
                              >> Check out the #1 result at
                              >> http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
                              >> For RAC references go to
                              >>
                              >
                              > http://www.oracle.com/ultrasearch/ww...=7&p_Query=RAC
                              >
                              >> Did you actually even attempt to look for yourself ?
                              >>
                              >
                              > Sigh. Again, just like with Daniel, we have failure here to understand the
                              > difference between a data warehouse application and an OLTP application.


                              I understand perfectly. I am not confused. I know exactly what each
                              product is capable of, down to the nth degree. It's my job to know, and
                              I'm very, very good at my job.
                              >
                              > The TPC-C benchmarks referred to above are for OLTP, which does not benefit
                              > from parallel database access to support complex decision support queries.


                              Exactly right. Still no confusion here
                              > What the TPC-C benchmark measures is the ability to process multiple
                              > transactions at one time, but each transaction is of an OLTP nature
                              > efficiently access small portions of each table.


                              Exactly right. Still no confusion here
                              > That is completely different from the TPC-H Benchmark that measures the
                              > ability to process a small number of queries that typically access data from
                              > an entire table (or tables), and benefit by accessing that data in parallel.


                              Right again. Still no confusion here
                              > TPC-H (formally TPC-D) is why parallel databases (and specifically share
                              > nothing parallel databases) were invented.


                              Ok - here is where it breaks down. Shared nothing parallelism à la UDB
                              and Teradata is just one way to do TPC-H. I believe that you yourself
                              identified that big SMP is also now a completely viable approach to
                              TPC-H (and the TPC-H benchmarks and Richard Winter VLDB survey also tend
                              to show this). Anyhow, this is taking us even further off-topic, so let's
                              drop it.
                              >
                              > The weakness and lack of LINEAR scalability of the Oracle implementation
                              > does not show in an OLTP environment such as TPC-C, regardless the number of
                              > simultaneous users that are accessing the data a one time. Of course for the
                              > person that started this thread, their primary interest (and maybe only
                              > interest) is in OLTP processing with failover capabilities.


                              Thank You. My point entirely. We ARE talking OLTP and HA. RAC is
                              completely in line with the original OP's requirements. Shared nothing,
                              NCR and TPC-H have absolutely nothing to do with what is being
                              discussed. So why do the IBM proponents keep bringing it up ad nauseam?
                              Here's how the dialogue went.

                              OP: I want a low cost multi-node HA environment for OLTP. Like what
                              Oracle does

                              IBM: IBM has a better parallel shared nothing architecture (and will
                              be easy to migrate to).

                              Daniel: Of course it does, but it's not relevant for the requirements.
                              RAC is relevant. TAF may be relevant too (Note that this is Daniel's
                              opinion about shared nothing, btw)

                              IBM: Parallel DB2 has had this function for years

                              Mark A: Daniel, You are confused, RAC and Parallel DB2 are two different
                              things. RAC does failover. Accusations of propaganda fly. (note - Mark A
                              does not point out the confusion in IBM, it's Daniel that's confused)

                              Daniel: I'm not confused. RAC is what the OP does. And RAC does two
                              things - failover and scale out

                              Database Guy: Daniel, You are confused. Where are the RAC scale out
                              numbers/references that compare to IBM's and Teradata's shared nothing
                              implementation?

                              Me: What has this got to do with the topic? These are the wrong things
                              to be asking. Instead, what you should have asked was where are the
                              scale out and HA numbers and references that are relevant to the OP's
                              planned usage of RAC. This is what I provided.

                              Mark A: You are confused....


                              I know Daniel rattles your cage at times, but I do not believe he was
                              ever confused about what he was saying or proposing. It seemed people
                              just wanted to take him out of context and pick a fight.
                              >
                              > But there are much bigger fish to fry regarding the existing COBOL, CICS,
                              > VSAM, etc.application before one worries about the technical merits of
                              > Oracle vs. DB2.
                              >

                              Exactly. Note that Oracle does support COBOL ESQL, and both CIC's and
                              DRDA access via Gateways.



                              • Mark A

                                #60
                                Re: yipeee!

                                "Mark Townsend" <markbtownsend@ comcast.net> wrote in message[color=blue]
                                > <snip>
                                > Exactly. Note that Oracle does support COBOL ESQL, and both CIC's and
                                > DRDA access via Gateways.
                                >[/color]
                                I think you mean CICS. I know that Oracle supports CICS on OS/390, but does
                                it support CICS and COBOL on an RS/6000 or other UNIX box that CICS may run on?

                                The originator of this thread (Mike) is thinking of picking up the entire
                                OS/390 application and running it on UNIX, probably converting the VSAM to
                                an RDBMS (at least as a first step), but keeping the COBOL and CICS. There is
                                no DRDA access via a Gateway in such an architecture.

                                BTW, I may be getting a little ornery at Daniel's trolling on this forum,
                                but I can assure you that I am not confused.

