Python from Wise Guy's Viewpoint

  • Terry Reedy

    #46
    Re: Python from Wise Guy's Viewpoint


    "Markus Mottl" <markus@oefai.a t> wrote in message
    news:bn1017$m39 $1@bird.wu-wien.ac.at...[color=blue]
    > Note that I am not defending ADA in any way or arguing against FPLs:[/color]
    in[color=blue]
    > fact, being an FPL-advocate myself I do think that FPLs (including[/color]
    Lisp)[color=blue]
    > have an edge what concerns writing safe code. But the Ariane-example[/color]
    just[color=blue]
    > doesn't support this claim. It was an absolutely horrible management
    > mistake to not check old code for compliance with the new spec. End
    > of story...[/color]

    The investigating commission reported about five errors that, in series,
    allowed the disaster. As I remember, another non-programming/language
    one was in mockup testing. The particular black box, known to be
    'good', was not included, but just simulated according to its expected
    behavior. If it had been included, and a flight simulated in real
    time with appropriate tilting and shaking, it would probably have
    given the spurious abort message that it did in the real flight.

    TJR



    • Erann Gat

      #47
      Re: Python from Wise Guy's Viewpoint

      In article <OAUkb.12491$pT1.1778@twister.nyc.rr.com>, Kenny Tilton
      <ktilton@nyc.rr.com> wrote:

      [Discussing the Ariane failure]

      > A small side-note: as I now understand things, the idea was not to abort
      > the mission, but to bring down the system. The thinking was that the
      > error would signify a hardware failure, and with any luck shutting down
      > would mean either loss of the backup system (if that was where the HW
      > fault occurred) or correctly falling back on the still-functioning
      > backup system if the supposed HW fault had been in the primary unit. ie,
      > an HW fault would likely be isolated to one unit.

      That's right. This is why hardware folks spend a lot of time thinking
      about common mode failures, and why software folks could learn a thing or
      two from the hardware folks in this regard.

      E.


      • Alex Martelli

        #48
        Re: Python from Wise Guy's Viewpoint

        Gerrit Holl wrote:
        > Hannu Kankaanpää wrote:
        >> Anyway, as a conclusion, I believe you'd be much happier with
        >> Ruby than with Python. It doesn't do this weird "statement vs
        >> expression" business, it has optional return, it has optional
        >> parens with function calls, and probably more of these things
        >> "fixed" that you consider Python's downsides. You're trying to
        >> make Python into a language that already exists, it seems, but
        >> for some reason Pythonistas are happy with Python and not rapidly
        >> converting to Ruby or Haskell.
        >
        > I wonder to what extent this statement is true. I know at least
        > 1 Ruby programmer who came from Python, but this spot check should
        > not be trusted, since I know only 1 Ruby programmer and only 1
        > former Python programmer <g>. But I have heard that there are a
        > lot of former Python programmers in the Ruby community. I think
        > it is safe to say that of all languages Python programmers migrate
        > to, Ruby is the strongest magnet. OTOH, the migration of this part
        > of the Python community to Ruby may have been completed already,
        > of course.

        Python and Ruby are IMHO very close, thus "compete" for roughly
        the same "ecological niche". I still don't have enough actual
        experience in "production" Ruby code to be able to say for sure,
        but my impression so far is that -- while no doubt there's a LOT
        of things for which they're going to be equally good -- Python's
        simplicity and uniformity help with application development for
        larger groups of programmers, while Ruby's extreme dynamism and
        more variegated style may be strengths for experimentation, or
        projects with one, or few and very well-attuned and experienced,
        developers. I keep coming back to Python (e.g. because I have
        no gmpy in Ruby for my own pet personal projects...:-) but I do
        mean to devote more of my proverbial "copious spare time" to
        Ruby explorations (e.g., porting gmpy, otherwise it's unlikely
        I'll ever get all that much combinatorial arithmetics done...;-).


        Alex


        • Joachim Durchholz

          #49
          Re: Python from Wise Guy's Viewpoint

          Pascal Bourguignon wrote:
          > The post at that url writes about the culture of the Ariane team, but
          > I would say that it's even a more fundamental problem of our culture
          > in general: we build brittle stuff with very little margin for error.
          > Granted, it would be costly to increase physical margin,

          Which is exactly why the margin is kept as small as possible.
          Occasionally, it will be /too/ small.

          Has anybody ever seen a car model series in which every unit worked
          perfectly from the first one?
          From what I read, every new model has its small quirks and
          "near-perfect" gotchas. The difference is just that you're not allowed
          to do that in expensive things like rockets (which is, among many other
          things, one of the reasons why space vehicles and aircraft are so d*mn
          expensive: if something goes wrong, you can't just drive them on the
          nearest parking lot and wait for maintenance and repair...)

          > but in this
          > case, adopting a point of view more like _robotics_ could help. Even
          > in case of hardware failure, there's no reason to shut down the mind;
          > just go on with what you have.

          As Steve wrote, letting a rocket carry on regardless isn't a good idea
          in the general case: it would be a major disaster if it made it to the
          next coast and crashed into the next town. Heck, it would be enough if
          the fuel tanks leaked, and the whole fuel rained down on a ship
          somewhere in the Atlantic - most rocket fuels are toxic.

          Regards,
          Jo


          • Pascal Bourguignon

            #50
            Re: Python from Wise Guy's Viewpoint

            Steve Schafer <see@reply.to.header> writes:

            > On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
            > <spam@thalassa.informatimago.com> wrote:
            >
            > >Even in case of hardware failure, there's no reason to shut down the
            > >mind; just go on with what you have.
            >
            > When the thing that failed is a very large rocket having a very large
            > momentum, and containing a very large amount of very volatile fuel, it
            > makes sense to give up and shut down in the safest possible way.

            You have to define a "dangerous" situation. Remember that this
            "safest possible way" is usually to blow the rocket up. AFAIK, while
            this parameter was out of range, there was no instability and the
            rocket was not uncontrollable.


            > Also keep in mind that this was a "can't possibly happen" failure
            > scenario. If you've deemed that it is something that can't possibly
            > happen, you are necessarily admitting that you have no idea how to
            > respond in a meaningful way if it somehow does happen.

            My point exactly. This "can't possibly happen" failure did happen, so
            clearly it was not a "can't possibly happen" physically, which means
            that the problem was with the software. We know that, but what I'm
            saying is that smarter software could have deduced it on the fly.

            We all agree that it would be better to have a perfect world and
            perfect, bug-free, software. But since that's not the case, I'm
            saying that instead of having software that behaves like simple unix C
            tools, where as soon as there is an unexpected situation, it calls
            perror() and exit(), it would be better to have smarter software that
            can try and handle UNEXPECTED error situations, including its own
            bugs. I would feel safer in an AI rocket.
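
Pascal's contrast between unix-tool-style error handling (print and exit) and software that survives the unexpected can be sketched in Python. This is only an illustrative sketch, not flight software; `read_sensor` and the fallback value are hypothetical stand-ins.

```python
import sys

def read_sensor():
    # Hypothetical reading that hits an "unexpected situation",
    # loosely modeled on an out-of-range conversion.
    raise OverflowError("value out of 16-bit range")

def unix_tool_style():
    # perror()-and-exit style: any surprise is fatal.
    try:
        return read_sensor()
    except OverflowError as e:
        print("fatal:", e, file=sys.stderr)
        sys.exit(1)

def supervised_style(fallback):
    # "Keep the mind running" style: note the surprise and
    # carry on with a degraded but sane substitute value.
    try:
        return read_sensor()
    except OverflowError as e:
        print("warning, using fallback:", e, file=sys.stderr)
        return fallback

print(supervised_style(fallback=0.0))  # degrades instead of dying
```

Whether the fallback value is actually sane is, of course, exactly the hard part of the argument in this thread.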


            --
            __Pascal_Bourguignon__

            Do not adjust your mind, there is a fault in reality.
            Lying for having sex or lying for making war? Trust US presidents :-(


            • Tim Sweeney

              #51
              Re: Python from Wise Guy's Viewpoint

              > THE GOOD:
              > THE BAD:
              >
              > 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
              > 90% of the code is function applications. Why not make it convenient?
              >
              > 9. Syntax for arrays is also bad [a (b c d) e f] would be better
              > than [a, b(c,d), e, f]

              Agreed with your analysis, except for these two items.

              #1 is a matter of opinion, but in general:

              - f(x,y) is the standard set by mathematical notation and all the
              mainstream programming language families, and is library neutral:
              calling a curried function is f(x)(y), while calling an uncurried
              function is f(x,y).

              - "f x y" is unique to the Haskell and LISP families of languages, and
              implies that most library functions are curried. Otherwise you have a
              weird asymmetry between curried calls "f x y" and uncurried calls
              which translate back to "f(x,y)". Widespread use of currying can lead
              to weird error messages when calling functions of many parameters: a
              missing third parameter in a call like f(x,y) is easy to report, while
              with curried notation, "f x y" is still valid, yet results in a type
              other than what you were expecting, moving the error up the AST to a
              less obvious place.
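
Tim's point about error locality can be made concrete in Python, which can express both call shapes (a sketch; `add3` and `curried_add3` are illustrative helpers, not from any library):

```python
def add3(x, y, z):
    # Uncurried f(x, y, z): arity is checked at the call site.
    return x + y + z

def curried_add3(x):
    # Curried "f x y z" emulated with nested functions.
    return lambda y: (lambda z: x + y + z)

try:
    add3(1, 2)  # missing third argument: fails immediately, right here
except TypeError as e:
    print("caught at call site:", e)

partial = curried_add3(1)(2)  # forgot the third argument: no error yet
print(callable(partial))      # True -- the mistake only surfaces later,
                              # wherever this value is used as a number
```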

              I think #9 is inconsistent with #1.

              In general, I'm wary of notations like "f x" that use whitespace as an
              operator (see http://www.research.att.com/~bs/whitespace98.pdf).


              • Marcin 'Qrczak' Kowalczyk

                #52
                Re: Python from Wise Guy's Viewpoint

                On Mon, 20 Oct 2003 13:52:14 -0700, Tim Sweeney wrote:

                > - "f x y" is unique to the Haskell and LISP families of languages, and
                > implies that most library functions are curried.

                No, Lisp doesn't curry. It really writes "(f x y)", which is different
                from "((f x) y)" (which is actually Scheme, not Lisp).

                In fact the syntax "f x y" without mandatory parens fits non-lispish
                non-curried syntaxes too. The space doesn't have to be left- or
                right-associative; it just binds all arguments at once, and this
                expression is different both from "f (x y)" and "(f x) y".

                The only glitch is that you have to express application to 0 arguments
                somehow. If you use "f()", you can't use "()" as an expression (for
                empty tuple for example). But when you accept it, it works. It's my
                favorite function application syntax.

                --
                __("< Marcin Kowalczyk
                \__/ qrczak@knm.org.pl
                ^^ http://qrnik.knm.org.pl/~qrczak/


                • Andrew Dalke

                  #53
                  Re: Python from Wise Guy's Viewpoint

                  Pascal Bourguignon:
                  > We all agree that it would be better to have a perfect world and
                  > perfect, bug-free, software. But since that's not the case, I'm
                  > saying that instead of having software that behaves like simple unix C
                  > tools, where as soon as there is an unexpected situation, it calls
                  > perror() and exit(), it would be better to have smarter software that
                  > can try and handle UNEXPECTED error situations, including its own
                  > bugs. I would feel safer in an AI rocket.

                  Since it was written in Ada and not C, and since it properly raised
                  an exception at that point (as originally designed), which wasn't
                  caught at a recoverable point, ending up in the default "better blow
                  up than kill people" handler ... what would your AI rocket have
                  done with that exception? How does it decide that an UNEXPECTED
                  error situation can be recovered? How would you implement it?
                  How would you test it? (Note that the above software wasn't
                  tested under realistic conditions; I assume in part because of cost.)

                  I agree it would be better to have software which can do that.
                  I have no good idea of how that's done. (And bear in mind that
                  my XEmacs session dies about once a year, eg, once when NFS
                  was acting flaky underneath it and a couple times because it
                  couldn't handle something X threw at it. ;)

                  The best examples of resilient architectures I've seen come from
                  genetic algorithms and other sorts of feedback training; eg,
                  subsumptive architectures for robotics and evolvable hardware.
                  There was a great article in CACM on programming an FPGA
                  via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
                  I recall) but pointed out the hard part about this approach is
                  that it's hard to understand, and the result used various defects
                  on the chip (part of the circuit wasn't used but the chip wouldn't
                  work without it) which makes the result harder to mass produce.

                  Andrew
                  dalke@dalkescientific.com



                  • Steve Schafer

                    #54
                    Re: Python from Wise Guy's Viewpoint

                    On 20 Oct 2003 22:08:30 +0200, Pascal Bourguignon
                    <spam@thalassa.informatimago.com> wrote:

                    >AFAIK, while this parameter was out of range, there was no instability
                    >and the rocket was not uncontrollable.

                    That's perfectly true, but also perfectly irrelevant. When your
                    carefully designed software has just told you that your rocket, which,
                    you may recall, is traveling at several thousand metres per second, has
                    just entered a "can't possibly happen" state, you don't exactly have a
                    lot of time in which to analyze all of the conflicting information and
                    decide which to trust and which not to trust. Whether that sort of
                    decision-making is done by engineers on the ground or by human pilots or
                    by some as yet undesigned intelligent flight control system, the answer
                    is the same: Do the safe thing first, and then try to figure out what
                    happened.

                    All well-posed problems have boundary conditions, and the solutions to
                    those problems are bounded as well. No matter what the problem or its
                    means of solution, a boundary is there, and if you somehow cross that
                    boundary, you're toast. In particular, the difficulty with AI systems is
                    that while they can certainly enlarge the boundary, they also tend to
                    make it fuzzier and less predictable, which means that testing becomes
                    much less reliable. There are numerous examples where human operators
                    have done the "sensible" thing, with catastrophic consequences.

                    >My point.

                    Well, actually, no. I assure you that my point is very different from
                    yours.

                    >This "can't possibly happen" failure did happen, so clearly it was not
                    >a "can't possibly happen" physically, which means that the problem was
                    >with the software.

                    No, it still was a "can't possibly happen" scenario, from the point of
                    view of the designed solution. And there was nothing wrong with the
                    software. The difficulty arose because the solution for one problem was
                    applied to a different problem (i.e., the boundary was crossed).

                    >it would be better to have smarter software that can try and handle
                    >UNEXPECTED error situations

                    I think you're failing to grasp the enormity of the concept of "can't
                    possibly happen." There's a big difference between merely "unexpected"
                    and "can't possibly happen." "Unexpected" most often means that you
                    haven't sufficiently analyzed the situation. "Can't possibly happen," on
                    the other hand, means that you've analyzed the situation and determined
                    that the scenario is outside the realm of physical or logical
                    possibility. There is simply no meaningful means of recovery from a
                    "can't possibly happen" scenario. No matter how smart your software is,
                    there will be "can't possibly happen" scenarios outside the boundary,
                    and your software is going to have to shut down.

                    >I would feel safer in an AI rocket.

                    What frightens me most is that I know that there are engineers working
                    on safety-critical systems that feel the same way. By all means, make
                    your flight control system as sophisticated and intelligent as you want,
                    but don't forget to include a simple, reliable, dumber-than-dirt
                    ejection system that "can't possibly fail" when the "can't possibly
                    happen" scenario happens.

                    Let me try to summarize the philosophical differences here: First of
                    all, I wholeheartedly agree that a more sophisticated software system
                    _may_ have prevented the destruction of the rocket. Even so, I think the
                    likelihood of that is rather small. (For some insight into why I think
                    so, you might want to take a look at Henry Petroski's _To Engineer is
                    Human_.) Where we differ is how much impact we believe that more
                    sophisticated software would have on the problem. I get the impression
                    that you believe that an AI-based system would drastically reduce
                    (perhaps even eliminate?) the "can't possibly happen" scenario. I, on
                    the other hand, believe that even the most sophisticated system enlarges
                    the boundary of the solution space by only a very small amount--the area
                    occupied by "can't possibly happen" scenarios remains far greater than
                    that occupied by "software works correctly and saves the rocket"
                    scenarios.

                    -Steve


                    • Matthew Danish

                      #55
                      Re: Python from Wise Guy's Viewpoint

                      On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
                      > > 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
                      > > 90% of the code is function applications. Why not make it convenient?
                      > >
                      > > 9. Syntax for arrays is also bad [a (b c d) e f] would be better
                      > > than [a, b(c,d), e, f]
                      > #1 is a matter of opinion, but in general:
                      >
                      > - f(x,y) is the standard set by mathematical notation and all the
                      > mainstream programming language families, and is library neutral:
                      > calling a curried function is f(x)(y), while calling an uncurried
                      > function is f(x,y).

                      And lambda notation is: \xy.yx or something like that. Math notation is
                      rather ad-hoc, designed for shorthand scribbling on paper, and in
                      general a bad idea to imitate for programming languages which are
                      written on the computer in an ASCII editor (which is one thing which
                      bothers me about ML and Haskell).

                      > - "f x y" is unique to the Haskell and LISP families of languages, and
                      > implies that most library functions are curried. Otherwise you have a
                      > weird asymmetry between curried calls "f x y" and uncurried calls
                      > which translate back to "f(x,y)".

                      Here's an "aha" moment for you:

                      In Haskell and ML, the two biggest languages with built-in syntactic
                      support for currying, there is also a datatype called a tuple (which is
                      a record with positional fields). All functions, in fact, only take a
                      single argument. The trick is that the syntax for tuples and the syntax
                      for currying combine to form the syntax for function calling:

                      f (x, y, z) ==> calling f with a tuple (x, y, z)
                      f x (y, z) ==> calling f with x, and then calling the result with (y, z).
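
The two lines above can be mimicked in Python to make the everything-takes-one-argument view concrete (an illustrative sketch of the ML/Haskell model, not how Python itself parses calls):

```python
def f_tuple(args):
    # f (x, y, z): a single argument that happens to be a tuple.
    x, y, z = args
    return x + y + z

def f_curried(x):
    # f x (y, z): a single argument, yielding a function of a tuple.
    def g(yz):
        y, z = yz
        return x + y + z
    return g

print(f_tuple((1, 2, 3)))    # 6
print(f_curried(1)((2, 3)))  # 6
```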

                      This, I think, is a win for a functional language. However, in a
                      not-so-functionally-oriented language such as Lisp, this gets in the way
                      of flexible parameter-list parsing, and doesn't provide that much value.
                      In Lisp, a form's meaning is determined by its first element, hence (f x
                      y) has a meaning determined by F (whether it is a macro, or functionally
                      bound), and Lisp permits such things as "optional", "keyword" (a.k.a. by
                      name) arguments, and ways to obtain the arguments as a list.

                      "f x y", to Lisp, is just three separate forms (all symbols).

                      > Widespread use of currying can lead
                      > to weird error messages when calling functions of many parameters: a
                      > missing third parameter in a call like f(x,y) is easy to report, while
                      > with curried notation, "f x y" is still valid, yet results in a type
                      > other than what you were expecting, moving the error up the AST to a
                      > less obvious place.

                      Nah, it should still be able to report the line number correctly.
                      Though I freely admit that the error messages spat out of compilers like
                      SML/NJ are not so wonderful.

                      > I think #9 is inconsistent with #1.

                      I think that if the parser recognizes that it is directly within a [ ]
                      form, it can figure out that these are not function calls but rather
                      elements, though it would require that function calls be wrapped in (
                      )'s now. And the grammar would be made much more complicated I think.

                      Personally, I prefer (list a (b c d) e f).

                      > In general, I'm wary of notations like "f x" that use whitespace as an
                      > operator (see http://www.research.att.com/~bs/whitespace98.pdf).

                      Hmm, rather curious paper. I never really thought of "f x" using
                      whitespace as an operator--it's a delimiter in the strict sense. The
                      grammar of ML and Haskell define that consecutive expressions form a
                      function application. Lisp certainly uses whitespace as a simple
                      delimiter. I'm not a big fan of required commas because it gets
                      annoying when you are editing large tables or function calls with many
                      parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
                      with extraneous characters like those, though it does try.

                      --
                      ; Matthew Danish <mdanish@andrew.cmu.edu>
                      ; OpenPGP public key: C24B6010 on keyring.debian.org
                      ; Signed or encrypted mail welcome.
                      ; "There is no dark side of the moon really; matter of fact, it's all dark."


                      • Michael Geary

                        #56
                        Re: Python from Wise Guy's Viewpoint

                        > > In general, I'm wary of notations like "f x" that use whitespace as an
                        > > operator (see http://www.research.att.com/~bs/whitespace98.pdf).

                        > Hmm, rather curious paper. I never really thought of "f x" using
                        > whitespace as an operator--it's a delimiter in the strict sense. The
                        > grammar of ML and Haskell define that consecutive expressions form a
                        > function application. Lisp certainly uses whitespace as a simple
                        > delimiter...

                        Did you read the cited paper *all the way to the end*?

                        -Mike



                        • Pascal Bourguignon

                          #57
                          Re: Python from Wise Guy's Viewpoint

                          "Andrew Dalke" <adalke@mindspr ing.com> writes:
                          [color=blue]
                          > Pascal Bourguignon:[color=green]
                          > > We all agree that it would be better to have a perfect world and
                          > > perfect, bug-free, software. But since that's not the case, I'm
                          > > saying that instead of having software that behaves like simple unix C
                          > > tools, where as soon as there is an unexpected situation, it calls
                          > > perror() and exit(), it would be better to have smarter software that
                          > > can try and handle UNEXPECTED error situations, including its own
                          > > bugs. I would feel safer in an AI rocket.[/color]
                          >
                          > Since it was written in Ada and not C, and since it properly raised
                          > an exception at that point (as originally designed), which wasn't
                          > caught at a recoverable point, ending up in the default "better blow
                          > up than kill people" handler ... what would your AI rocket have
                          > done with that exception? How does it decide that an UNEXPECTED
                          > error situation can be recovered?[/color]

                          By looking at the big picture!

                          The blow up action would be activated only when the big picture shows
                          that the AI has no control of the rocket and that it is going down.


                          > How would you implement it?

                          Like any AI.

                          > How would you test it? (Note that the above software wasn't
                          > tested under realistic conditions; I assume in part because of cost.)

                          In a simulator. In any case, the point is to have software that is
                          able to handle even unexpected failures.


                          > I agree it would be better to have software which can do that.
                          > I have no good idea of how that's done. (And bear in mind that
                          > my XEmacs session dies about once a year, eg, once when NFS
                          > was acting flaky underneath it and a couple times because it
                          > couldn't handle something X threw at it. ;)

                          XEmacs is not AI.

                          > The best examples of resilient architectures I've seen come from
                          > genetic algorithms and other sorts of feedback training; eg,
                          > subsumptive architectures for robotics and evolvable hardware.
                          > There was a great article in CACM on programming an FPGA
                          > via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
                          > I recall) but pointed out the hard part about this approach is
                          > that it's hard to understand, and the result used various defects
                          > on the chip (part of the circuit wasn't used but the chip wouldn't
                          > work without it) which makes the result harder to mass produce.
                          >
                          > Andrew
                          > dalke@dalkescientific.com

                          In any case, you're right, the main problem may be that it was
                          specified to blow up when an unhandled exception was raised...



                          --
                          __Pascal_Bourguignon__

                          Do not adjust your mind, there is a fault in reality.
                          Lying for having sex or lying for making war? Trust US presidents :-(


                          • Andrew Dalke

                            #58
                            Re: Python from Wise Guy's Viewpoint

                            Me:
                            > > How would you test it? (Note that the above software wasn't
                            > > tested under realistic conditions; I assume in part because of cost.)

                            Pascal Bourguignon:[color=blue]
                            > In a simulator. In any case, the point is to have a software that is
                            > able to handle even unexpected failures.[/color]

                            Like I said, the existing code was not tested in a simulator. Why
                            do you think some AI code *would* be tested for this same case?
                            (Actually, I believe that an AI would need to be trained in a
                            simulator, just like humans, but that it would require so much
                            testing as to preclude its use, for now, in rocket control systems.)
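The "handle even unexpected failures" idea being argued over can at least be sketched as a supervisor pattern: catch any exception from the primary routine and degrade to a cruder backup instead of shutting the channel down. All names here are hypothetical, and the overflow check merely mimics the Ariane-style out-of-range arithmetic error; it is a sketch of a policy, not flight software.

```python
def primary_guidance(reading):
    # Deliberately fragile: rejects values outside a 16-bit range,
    # standing in for the unhandled conversion error in the real incident.
    if abs(reading) > 32767:
        raise OverflowError("horizontal bias out of 16-bit range")
    return reading * 2

def backup_guidance(reading):
    # Crude but total: clamps the input instead of failing.
    return max(-32767, min(32767, reading)) * 2

def guidance(reading):
    # Supervisor: fall back on any failure rather than aborting.
    try:
        return primary_guidance(reading)
    except Exception:
        return backup_guidance(reading)

print(guidance(100))     # -> 200   (primary path)
print(guidance(100000))  # -> 65534 (falls back instead of aborting)
```

Whether the degraded answer is *safe* is exactly the hard part the thread keeps circling: a wrong-but-plausible output can be worse than a clean shutdown.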

                            Nor have you given any sort of guideline on how to implement
                            this sort of AI in the first place. Without it, you've just restated
                            the dream of many people over the last few centuries. It's a
                            dream I would like to see happen, which is why I agreed with you.
                            > > couldn't handle something X threw at it. ;)

                            > XEmacs is not AI

                            Yup, which is why the smiley is there. You said that C was
                            not the language to use (cf your perror/exit comment) and implied
                            that Ada wasn't either, so I assumed you had a more resilient
                            programming language in mind. My response was to point
                            out that Emacs Lisp also crashes (rarely) given unexpected
                            errors and so imply that Lisp is not the answer.

                            Truly I believe that programming languages as we know
                            them are not the (direct) solution, hence my pointers to
                            evolvable hardware and similar techniques.

                            Even then, we still have a long way to go before they
                            can be used to control a rocket. They require a lot of
                            training (just like people) and software simulators just
                            won't cut it. The first "AI"s will replace those things
                            we find simple and commonplace[*] (because our brain
                            evolved to handle it), and not hard and rare.

                            Andrew
                            dalke@dalkescientific.com[*]
                            In thinking of some examples, I remembered a passage in
                            one of Cordwainer Smith's stories. In them, dogs, cats,
                            eagles, cows, and many other animals were artificially
                            endowed with intelligence and a human-like shape.
                            Turtles were bred for tasks which required long patience.
                            For example, one turtle was assigned the task of standing
                            by a door in case there was trouble, which he did for
                            100 years, without complaint.



                            • Dennis Lee Bieber

                              #59
                              Re: Python from Wise Guy's Viewpoint

                              Andrew Dalke fed this fish to the penguins on Monday 20 October 2003
                              21:41 pm:

                              > For example, one turtle was assigned the task of standing
                              > by a door in case there was trouble, which he did for
                              > 100 years, without complaint.
                              >
                              I do hope he was allowed time-out for the occasional lettuce leaf or
                              other veggies... <G>

                              --
                              > ================================================== <
                              > wlfraed@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
                              > wulfraed@dm.net | Bestiaria Support Staff <
                              > ================================================== <
                              > Bestiaria Home Page: http://www.beastie.dm.net/ <
                              > Home Page: http://www.dm.net/~wulfraed/ <


                              • Matthew Danish

                                #60
                                Re: Python from Wise Guy's Viewpoint

                                On Mon, Oct 20, 2003 at 07:27:49PM -0700, Michael Geary wrote:
                                > > > In general, I'm wary of notations like "f x" that use whitespace as an
                                > > > operator (see http://www.research.att.com/~bs/whitespace98.pdf).
                                >
                                > > Hmm, rather curious paper. I never really thought of "f x" using
                                > > whitespace as an operator--it's a delimiter in the strict sense. The
                                > > grammar of ML and Haskell define that consecutive expressions form a
                                > > function application. Lisp certainly uses whitespace as a simple
                                > > delimiter...
                                >
                                > Did you read the cited paper *all the way to the end*?

                                Why bother? It says "April 1" in the Abstract, and got boring about 2
                                paragraphs later. I should have scare-quoted "operator" above, or
                                rather the lack of one, which is interpreted as meaning function
                                application.

                                --
                                ; Matthew Danish <mdanish@andrew.cmu.edu>
                                ; OpenPGP public key: C24B6010 on keyring.debian.org
                                ; Signed or encrypted mail welcome.
                                ; "There is no dark side of the moon really; matter of fact, it's all dark."
