bitwise not - not what I expected

This topic is closed.
  • Elaine Jackson

    bitwise not - not what I expected

    Is there a function that takes a number with binary numeral a1...an to the
    number with binary numeral b1...bn, where each bi is 1 if ai is 0, and vice
    versa? (For example, the function's value at 18 [binary 10010] would be 13
    [binary 01101].) I thought this was what the tilde operator (~) did, but when I
    went to try it I found out that wasn't the case. I discovered by experiment (and
    verified by looking at the documentation) that the tilde operator takes n
    to -(n+1). I can't imagine what that has to do with binary numerals. Can anyone
    shed some light on that? (In case you're curious, I'm writing a script that will
    play Nim, just as a way of familiarizing myself with bitwise operators. Good
    thing, too: I thought I understood them, but apparently I don't.)

    Muchas gracias for any and all helps and hints.

    Peace,
    EJ


  • Graham Fawcett

    #2
    Re: bitwise not - not what I expected

    Elaine Jackson wrote:
> Is there a function that takes a number with binary numeral a1...an to the
> number with binary numeral b1...bn, where each bi is 1 if ai is 0, and vice
> versa? (For example, the function's value at 18 [binary 10010] would be 13
> [binary 01101].) I thought this was what the tilde operator (~) did, but when I
> went to try it I found out that wasn't the case. I discovered by experiment (and
> verified by looking at the documentation) that the tilde operator takes n
> to -(n+1). I can't imagine what that has to do with binary numerals.

    It has a lot to do with binary! Google for "two's complement".

    In the meantime, try this:
>>> ~18 & 31
    13

    The '~' operator cannot care about precision -- that is, how many bits
    you're operating on, or expecting in your result. In your example, you
    represent decimal 18 as '10010', but '000000010010' is also correct,
    right?

    In two's complement math, the inverse '111111101101' (and any version of
    it extended with more leading ones) is equivalent to decimal -19; the
    5-bit '01101' on its own reads as 13, which is why the mask matters.

    And-ing with a mask of length 'n' will ensure that you only get the
    least significant n bits -- and this is what you're looking for.
    Since you're operating on five bits in your example, I chose decimal 31,
    or '11111'.
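
    Graham's masking trick generalizes to any width; here is a minimal
    sketch (the helper name `bitwise_not_n` is mine, not from the thread):

    ```python
    def bitwise_not_n(x, n):
        """Complement the n least significant bits of x (a sketch)."""
        mask = (1 << n) - 1   # n ones: n=5 gives 0b11111 == 31
        return ~x & mask      # invert, then keep only the low n bits

    print(bitwise_not_n(18, 5))  # 13, matching the example above
    ```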

    -- Graham




    • Michael Peuser

      #3
      Re: bitwise not - not what I expected


      "Elaine Jackson" <elainejackson7 355@home.com> schrieb im Newsbeitrag
      news:1YD%a.7478 79$3C2.17342633 @news3.calgary. shaw.ca...


      ....

      > I'm writing a script that will
      > play Nim, just as a way of familiarizing myself with bitwise operators. Good
      > thing, too: I thought I understood them, but apparently I don't.)
      Hi Elaine,
      You have described a general misconception; you seem to be not the only
      one to live with it.
      The enlightening answers that have been posted might suffice, but I
      should like to add some more "enlightenment":
      Bit complements have a lot to do with set complements and arithmetic
      negation (sometimes called two's complement for obvious reasons). Consider
      the set of "red" and "blue". Now what's the complement? "green" and
      "yellow" is obviously the wrong answer. You in fact cannot give any answer
      before you define the total set you are dealing with. The same applies to
      logical bit operations. Generally you take a "processor" word or a part of
      it to be defined. Some high level languages are more flexible; and even some
      computers ("vector processors") are.

      The only rule is, that ~(~x) == x
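
      That rule is easy to check from the interpreter; a quick sketch:

      ```python
      # ~ is an involution: applying it twice returns the original value,
      # regardless of sign or size (Python ints behave as infinite-width
      # two's complement).
      for x in (0, 5, 18, -19, 2**40):
          assert ~(~x) == x
      print("ok")
      ```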

      The same situation with numbers: What is the negation of +5? You have to
      think very hard! This is a trick question and you probably will give a
      "trick answer": -5. You should be aware that this is just a trick. "-5"
      contains no other information than that it is some "complement" of 5.
      (Same with complex "imaginary" numbers: 5j (in Python) just says it is
      some fancy 5.)

      Now we define a transformation between positive numbers and bit patterns:
      5 = LoL (writing L for 1 and o for 0). Note that 5 == ...000005 or
      LoL == ...ooooLoL does not help any understanding, so you generally skip
      this part.

      Now you do some arithmetic "inversion": 5 -> -5. This however can (and
      should) stay a secret of the processor! By no means should you be interested
      in how the machine represents "-5". If you are curious, then know that
      there have been times when computers represented -5 as ...LLLoLo. Yes, it
      worked! And you had two different "zeros" then: +0 and -0 !!!!

      Most computers do not distinguish between the representation of negative
      numbers and complemented sets (let alone note a special "total set" the
      complement was referring to). Thus the "secret" of modern two's-complement
      computer arithmetic is always disclosed to you.

      Note that there is no use in something like "masking" the MSB, i.e.
      having bit complements work on only 31 bits. This will lead to
      ~5 == ~...ooLoL == oLL...LLoLo == 2,147,483,642. Not much improvement, eh!?

      Kindly
      Michael P



      • Irmen de Jong

        #4
        Re: bitwise not - not what I expected

        While others explained how the ~ operator works, let me suggest
        another possibility: the bitwise exclusive or.
        >>> def bin(i):
        ...     l = ['0000', '0001', '0010', '0011', '0100', '0101', '0110', '0111',
        ...          '1000', '1001', '1010', '1011', '1100', '1101', '1110', '1111']
        ...     s = ''.join(map(lambda x, l=l: l[int(x, 16)], hex(i)[2:]))
        ...     if s[0] == '1' and i > 0:
        ...         s = '0000' + s
        ...     return s
        ...
        >>> bin(18)
        '00010010'
        >>> ~18
        -19
        >>> bin(~18)  # tricky...
        '11111111111111111111111111101101'
        >>> ~18 & 0x1f
        13
        >>> bin(~18 & 0x1f)
        '00001101'
        >>> 18 ^ 0x1f  # XOR!
        13
        >>> bin(18 ^ 0x1f)  # XOR
        '00001101'
        >>>


        You still have to think about the number of bits you want to invert.
        x ^ 0x1f inverts the 5 least significant bits of x.
        x ^ 0xff inverts the 8 least significant bits of x, and so on.
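
        The XOR form and the masked ~ form agree whenever the mask covers the
        bits in play; a small sketch (the loop bounds are my own choice):

        ```python
        mask = 0x1f                            # five ones: invert the low 5 bits
        for x in range(32):                    # every 5-bit value
            assert (x ^ mask) == (~x & mask)   # XOR-with-mask == masked complement
        print("ok")
        ```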


        --Irmen de Jong


        • Dennis Lee Bieber

          #5
          Re: bitwise not - not what I expected

          Elaine Jackson fed this fish to the penguins on Saturday 16 August 2003
          09:58 pm:
          > Is there a function that takes a number with binary numeral a1...an to
          > the number with binary numeral b1...bn, where each bi is 1 if ai is 0,
          > and vice versa? (For example, the function's value at 18 [binary
          > 10010] would be 13 [binary 01101].) I thought this was what the tilde
          > operator (~) did, but when I went to try it I found out that wasn't
          > the case. I discovered by experiment (and verified by looking at the
          > documentation) that the tilde operator takes n to -(n+1). I can't
          > imagine what that has to do with binary numerals. Can anyone shed
          > some light on that? (In case you're curious, I'm writing a script
          > that will play Nim, just as a way of familiarizing myself with
          > bitwise operators. Good thing, too: I thought I understood them, but
          > apparently I don't.)
          >
          > Muchas gracias for any and all helps and hints.
          You've had lots of answers at the moment though I haven't seen anyone
          explain away the "+1" part...

          Most computers use twos-complement arithmetic to avoid the problem of
          having two valid values for integer 0, which is what appears in ones
          complement arithmetic.

          For argument, assume an 8-bit integer. The value of "5" would be
          represented as 00000101. The one's complement negative would be
          11111010. So far there isn't any problem... But consider the value of
          0, represented as 00000000. A one's complement negative would become
          11111111 -- But mathematically, +0 = -0; in one's complement math, this
          does not hold true.

          So a little trick is played, to create twos complement... To negate a
          number, we take the ones complement, and then add 1 to the result. The
          "5" then goes through: 00000101 -> 11111010 + 1 -> 11111011... Looks
          strange, doesn't it... But watch what happens to that 8-bit 0: 00000000
          -> 11111111 + 1 -> (overflows) 00000000.... Negative 0 is the same as
          positive 0.

          So when you complemented your number, you neglected to take into
          account that the complement covers the entire bit width, including all
          those 0 bits to the left; and then, when displaying the result, you
          were confused by what the computer does to display it... Namely,
          seeing an MSB set to 1, it interpreted the result as a negative
          number, put out a "-" sign, then generated a twos complement to create
          a positive value for output. The twos complement has that +1 step, so
          the ones complement of "18" became "19".


          --
          > =============================================================== <
          > wlfraed@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
          > wulfraed@dm.net | Bestiaria Support Staff <
          > =============================================================== <
          > Bestiaria Home Page: http://www.beastie.dm.net/ <
          > Home Page: http://www.dm.net/~wulfraed/ <


          • Michael Peuser

            #6
            Re: bitwise not - not what I expected


            "Dennis Lee Bieber" <wlfraed@ix.net com.com> schrieb im Newsbeitrag
            news:c84511-ol3.ln1@beastie .ix.netcom.com. ..[color=blue]
            > Elaine Jackson fed this fish to the penguins on Saturday 16 August 2003
            > 09:58 pm:
            >[color=green]
            > >
            > >
            > > Is there a function that takes a number with binary numeral a1...an to
            > > the number with binary numeral b1...bn, where each bi is 1 if ai is 0,
            > > and vice versa? (For example, the function's value at 18 [binary
            > > 10010] would be 13
            > > [binary 01101].) I thought this was what the tilde operator (~) did,
            > > [but when I
            > > went to try it I found out that wasn't the case. I discovered by
            > > experiment (and verified by looking at the documentation) that the
            > > tilde operator takes n to -(n+1). I can't imagine what that has to do
            > > with binary numerals.[/color][/color]

            [..]
            [color=blue]
            > You've had lots of answers at the moment though I haven't seen[/color]
            anyone[color=blue]
            > explain away the "+1" part...
            >
            > Most computers use twos-complement arithmetic to avoid the problem[/color]
            of[color=blue]
            > having two valid values for integer 0, which is what appears in ones
            > complement arithmetic.
            >
            > For argument, assume an 8-bit integer. The value of "5" would be
            > represented as 00000101. The one's complement negative would be
            > 11111010. So far there isn't any problem... But consider the value of
            > 0, represented as 00000000. A one's complement negative would become
            > 11111111 -- But mathematically, +0 = -0; in one's complement math, this
            > does not hold true.
            >
            > So a little trick is played, to create twos complement... To[/color]
            negate a[color=blue]
            > number, we take the ones complement, and then add 1 to the result. The
            > "5" then goes through: 00000101 -> 11111010 + 1 -> 11111011... Looks
            > strange, doesn't it... But watch what happens to that 8-bit 0: 00000000
            > -> 11111111 + 1 -> (overflows) 00000000.... Negative 0 is the same as
            > positive 0.[/color]

            [..]

            I have the impression (may be wrong) that you are working under the
            misconception that there can be a "natural" binary representation of
            negative numbers!?
            Three conventions have commonly been used so far:
            1's complement
            2's complement
            Sign tag plus absolute binary value

            All of them have their pros and cons. For a mixture of very technical
            reasons (you mentioned the +0/-0 conflict, I might add the use of binary
            adders for subtraction) most modern computers use 2's complement, and this
            now leads to those funny speculations in this thread. ;-)
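
            For a concrete comparison of the three conventions, here is a sketch
            for an assumed 8-bit word (the helper names are mine):

            ```python
            def ones_complement(x):
                return (~x) & 0xFF      # flip every bit

            def twos_complement(x):
                return (-x) & 0xFF      # flip every bit, then add one

            def sign_magnitude(x):
                return 0x80 | x         # set the sign bit, keep the magnitude

            # -5 under each convention:
            print(format(ones_complement(5), '08b'))  # 11111010
            print(format(twos_complement(5), '08b'))  # 11111011
            print(format(sign_magnitude(5), '08b'))   # 10000101
            ```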

            Kindly
            Michael P



            • Dennis Lee Bieber

              #7
              Re: bitwise not - not what I expected

              Michael Peuser fed this fish to the penguins on Sunday 17 August 2003
              02:41 pm:
              > I have the impression (may be wrong) that you are working under the
              > misconception that there can be a "natural" binary representation of
              > negative numbers!?

              Apologies if I gave that impression... the +/- 0 technical affair is
              the main reason I went into the whole thing...
              > Three conventions have commonly been used so far:
              > 1's complement
              > 2's complement
              > Sign tag plus absolute binary value
              >
              > All of them have their pros and cons. For a mixture of very technical
              > reasons (you mentioned the +0/-0 conflict, I might add the use of
              > binary adders for subtraction) most modern computers use 2's complement,
              > and this now leads to those funny speculations in this thread. ;-)
              From a human readable standpoint, your third option is probably the
              most "natural"; after all, what is -19 in human terms but a "pure" 19
              prefaced with a negation tag marker... (I believe my college
              mainframe's BCD hardware unit actually put the sign marker in the
              nibble representing the decimal point location -- but it has been 25
              years since I had to know what a Xerox Sigma did for COBOL packed
              decimal <G>).

              ie, 00010011 vs -00010011 <G>

              1s complement is electrically easy; just "not" each bit.

              2s complement is mathematically cleaner as 0 is 0, but requires an
              adder on top of the 1s complement circuit... Though both complement
              styles lead to the ambiguity of signed vs unsigned values.



              • Elaine Jackson

                #8
                Re: bitwise not - not what I expected


                "Michael Peuser" <mpeuser@web.de > wrote in message
                news:bhosrr$u57 $06$1@news.t-online.com...

                | I have the impression (may be wrong) that you are working under the
                | misconception that there can be a "natural" binary representation of
                | negative numbers!?
                | Three conventions have commonly been used so far:
                | 1's complement
                | 2's complement
                | Sign tag plus absolute binary value

                The last alternative sounds like what I was assuming. If it is, I would argue
                that it's pretty darn natural. Here's a little function to illustrate what I
                mean:

                def matilda(n):  ## "my tilde"
                    if 0 <= n < pow(2, 29):
                        for i in range(1, 31):
                            iOnes = pow(2, i) - 1
                            if n <= iOnes:
                                return iOnes - n   # e.g. matilda(18) -> 13
                    else:
                        raise ValueError("n out of range")



                • Grant Edwards

                  #9
                  Re: bitwise not - not what I expected

                  In article <bhosrr$u57$06$1@news.t-online.com>, Michael Peuser wrote:

                  > I have the impression (may be wrong) that you are working under the
                  > misconception that there can be a "natural" binary representation of
                  > negative numbers!?
                  > Three conventions have commonly been used so far:
                  > 1's complement
                  > 2's complement
                  > Sign tag plus absolute binary value
                  >
                  > All of them have their pros and cons. For a mixture of very technical
                  > reasons (you mentioned the +0/-0 conflict, I might add the use of binary
                  > adders for subtraction)

                  The latter is _far_ more important than the former. Being able
                  to use a simple binary adder to do operations on either signed
                  or unsigned values is a huge savings in CPU and ISA design. I
                  doubt that anybody really cares about the +0 vs. -0 issue very
                  much (IEEE FP has zeros of both signs, and nobody seems to
                  care).
                  > most modern computers use 2's complement, and this now leads to
                  > those funny speculations in this thread. ;-)

                  --
                  Grant Edwards          grante          Yow!  An Italian is COMBING
                                           at           his hair in suburban DES
                                        visi.com        MOINES!


                  • Michael Peuser

                    #10
                    Re: bitwise not - not what I expected


                    "Bengt Richter" <bokr@oz.net> schrieb im Newsbeitrag
                    news:bhpjm1$g01 $0@216.39.172.1 22...[color=blue]
                    > On Sun, 17 Aug 2003 22:18:39 GMT, Dennis Lee Bieber[/color]
                    <wlfraed@ix.net com.com> wrote:[color=blue]
                    >[color=green]
                    > >Michael Peuser fed this fish to the penguins on Sunday 17 August 2003
                    > >02:41 pm:
                    > >[color=darkred]
                    > >> I have the impression (may be wrong) that you are working under the
                    > >> misconception that there can be a "natural" binary represensation of
                    > >> negative numbers!?[/color]
                    > >
                    > > Apologies if I gave that impression... the +/- 0 technical affair[/color][/color]
                    is[color=blue][color=green]
                    > >the main reason I went into the whole thing...
                    > >[color=darkred]
                    > >> Three conventions have commonly been used so far:
                    > >> 1- Complement
                    > >> 2-Complement
                    > >> Sign tag plus absolut binary values
                    > >>
                    > >> All of them have their pros and cons. For a mixture of very technical
                    > >> reasons (you mentioned the +0/-0 conflict, I might add the use of
                    > >> binary adders for subtraction) most modern computers use 2-complement,
                    > >> and this now leads to those funny speculations in this thread. ;-)
                    > >>[/color]
                    > > From a human readable standpoint, your third option is probably[/color][/color]
                    the[color=blue][color=green]
                    > >most "natural"; after all, what is -19 in human terms but a "pure" 19
                    > >prefaced with a negation tag marker... (I believe my college
                    > >mainframe's BCD hardware unit actually put the sign marker in the
                    > >nibble representing the decimal point location -- but it has been 25
                    > >years since I had to know what a Xerox Sigma did for COBOL packed
                    > >decimal <G>).
                    > >
                    > > ie, 00010011 vs -00010011 <G>
                    > >
                    > > 1s complement is electrically easy; just "not" each bit.
                    > >
                    > > 2s complement is mathematically cleaner as 0 is 0, but requires[/color][/color]
                    an[color=blue][color=green]
                    > >adder to the 1s complement circuit... Though both complement styles
                    > >lead to the ambiguity of signed vs unsigned values
                    > >[/color]
                    > Everyone says "two's complement" and then usually starts talking about[/color]
                    numbers[color=blue]
                    > that are bigger than two. I'll add another interpretation, which is what I[/color]
                    thought[color=blue]
                    > when I first heard of it w.r.t. a cpu that was designed on the basis that[/color]
                    all its[color=blue]
                    > "integer" numbers were fixed point fractions up to 0.9999.. to whatever[/color]
                    precision[color=blue]
                    > the binary fractional bits provided. There was no units bit. And if you[/color]
                    took one[color=blue]
                    > of these fractional values 0.xxxx and subtracted it from 2.0, you would[/color]
                    have a[color=blue]
                    > complementary number with respect to two. Well, for addition and[/color]
                    subtraction, that turns[color=blue]
                    > out to work just like the "two's complement" integers we are used to. But[/color]
                    since the[color=blue]
                    > value of fractional bits were all in negative powers of two, squaring[/color]
                    e.g., .5 had[color=blue]
                    > to result in a consistent representation of 0.25 -- i.e. in binary[/color]
                    squaring 0.1[color=blue]
                    > resulted in 0.01 -- which is shifted one bit from what you get looking at[/color]
                    the numbers[color=blue]
                    > as integers with the lsb at the bottom of the registers and the result.
                    >
                    > I.e., a 32-bit positive integer n in the fractional world was n*2**-31. If[/color]
                    you square[color=blue]
                    > that for 64 bits, you get n**2, but in the fractional world that looks[/color]
                    like (n**2)*2**-63,[color=blue]
                    > where it's supposed to be (n*2**-31)**2 => (n**2)*2**-62 with respect to[/color]
                    the binary point.[color=blue]
                    > The fractional model preserved an extra bit of precision in multiplies.
                    >
                    > So on that machine we used to count bits from the left instead of the[/color]
                    right, and place imaginary[color=blue]
                    > binary points in the representations , so a binary 0.101 could be read as[/color]
                    "5 at 3" or "2.5 at 2"[color=blue]
                    > or "10 at 4" etc. And the multiplying rule was x at xbit times y at ybit[/color]
                    => x*y at xbit+ybit.[color=blue]
                    >
                    > You can do the same counting the bit positions leftwards from lsb at 0, as[/color]
                    we usually do now,[color=blue]
                    > of course, to play with fixed point fractions. A 5 at 0 is then 1.25 at 2[/color]
                    ;-)[color=blue]
                    >
                    > Anyway, my point is that there was a "two's complement" implementation[/color]
                    that really meant[color=blue]
                    > a numeric value complement with respect to the value two ;-)
                    >
                    > Regards,
                    > Bengt Richter[/color]


                    A very good point! I might add that this is by no means an exotic feature.
                    Mathematically speaking there is great charm in computing just inside the
                    interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
                    operations. You have to keep track of the scale of course - it is a little
                    bit like working with slide rules, if anyone can remember those tools ;-)

                    Even modern chips have support for this format, e.g. there is the $5 Atmel
                    Mega AVR, which has two kinds of multiplication instructions: one for
                    integer multiplication and one which automatically adds a left shift after
                    the multiplication! I leave it as an exercise to find out why this is
                    necessary when multiplying fractional numbers ;-)

                    Negative numbers are formed according to the same rule for fractionals and
                    integers:
                    Take the maximum positive number: 2**32-1 or 0.999999
                    Extend your scope
                    Add one bit: 2**32 or 1
                    Double it: 2**33 or 2
                    Subtract the number in question
                    Reduce your scope again
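
                    For integers that recipe reduces to "subtract from a power of two and
                    drop the extra bit"; a sketch assuming a 32-bit word (the helper name
                    `neg32` is mine):

                    ```python
                    BITS = 32

                    def neg32(x):
                        # subtract the number in question from 2**BITS,
                        # then keep only the original BITS bits
                        return (2**BITS - x) % 2**BITS

                    print(neg32(5) == ((-5) & 0xFFFFFFFF))  # True: same 32-bit pattern as -5
                    ```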

                    Kindly Michael P



                    • Grant Edwards

                      #11
                      Re: bitwise not - not what I expected

                      In article <bhq01j$tge$03$1@news.t-online.com>, Michael Peuser wrote:

                      > A very good point! I might add that this is by no means an exotic feature.
                      > Mathematically speaking there is great charm in computing just inside the
                      > interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
                      > operations. You have to keep track of the scale of course - it is a little
                      > bit like working with slide rules, if anyone can remember those tools ;-)

                      Sure. I've got two sitting at home. :)

                      FWIW, it used to be fairly common for process-control systems
                      to define operations only over the interval (-1,+1). This made
                      implementation easy, and the input and output devices
                      (temp/pressure sensors, valves, whatnot) all had pre-defined
                      ranges that mapped logically to the (-1,+1) interval.

                      --
                      Grant Edwards          grante          Yow!  The SAME WAVE keeps
                                               at           coming in and COLLAPSING
                                            visi.com        like a rayon MUU-MUU...


                      • Michael Peuser

                        #12
                        Re: bitwise not - not what I expected


                        "Grant Edwards" <grante@visi.co m> schrieb im Newsbeitrag
                        news:3f40d077$0 $164$a1866201@n ewsreader.visi. com...[color=blue]
                        > In article <bhq01j$tge$03$ 1@news.t-online.com>, Michael Peuser wrote:
                        >[color=green]
                        > > A very good point! I might add that this is my no means an exotic[/color][/color]
                        feature.[color=blue][color=green]
                        > > Mathematically speaking there is great charme in computing just inside[/color][/color]
                        the[color=blue][color=green]
                        > > invervall (-1,+1). And if you have no FPU you can do *a lot* of pseudo[/color][/color]
                        real[color=blue][color=green]
                        > > operations. You have get track of the scale of course - it is a little[/color][/color]
                        bit[color=blue][color=green]
                        > > like working with sliding rules if anyone can remember those tools ;-)[/color]
                        >
                        > Sure. I've got two sitting at home. :)
                        >
                        > FWIW, it used to be fairly common for process-control systems
                        > to define operations only over the interval (-1,+1). This made
                        > implimentation easy, and the input and output devices
                        > (temp/pressure sensors, valves, whatnot) all had pre-defined
                        > ranges that mapped logically to the (-1,+1) interval.
                        >
                        > --[/color]
                        Yes it simplifies a lot of matters, even when using full floating point
                        numbers. Take OpenGL e.g. The colour space is a 1x1x1 cube. Very fine! No
                        magic numbers near 256 ;-)

                        Kindly
                        Michael P



                        • Dennis Lee Bieber

                          #13
                          Re: bitwise not - not what I expected

                          Grant Edwards fed this fish to the penguins on Monday 18 August 2003
                          06:11 am:
                          > Sure. I've got two sitting at home. :)
                          Only two? <G>

                          I've got five (the most recent, new-in-box, cost me more than I paid
                          for an HP25 back in 1978 <G>)... Regrettably, I missed my chance at a
                          lovely plastic-over-bamboo laminate back then... As I recall, a
                          deci-trig log-log model, being cleared out by my college bookstore at
                          half price (which put it about $25 -- the HP25 was $100 or so, and also
                          a clear-out as no one else was smart enough to buy an RPN calculator).

