c# 2004 (Whidbey)

  • Mark Pearce

    #31
    Re: c# 2004 (Whidbey)

    Hi Jon,

    I quite agree that there is no silver bullet. Unit tests on their own,
    whether manual or automated, won't find all of your bugs. Nor will stepping
    through your code, or performing code inspections. Each of these techniques
    on its own will find a specific set of bugs, and only by combining all of
    them can you have some confidence in your code.

    My criticism is that Niall's attitude puts all the emphasis on unit tests
    backed by code coverage, and this simply doesn't work in isolation.

    Here are a couple of papers that I found useful in clarifying my thinking
    when doing research for my book. The first looks at subtleties in using code
    coverage tools, the second looks at problems with unit test automation.




    Mark
    --
    Author of "Comprehens ive VB .NET Debugging"



    "Jon Skeet" <skeet@pobox.co m> wrote in message
    news:MPG.19a6a4 357ca4ed9a98a31 4@news.microsof t.com...
    Mark Pearce <evil@bay.com > wrote:[color=blue]
    > In addition, code coverage tools used to verify your unit tests are not[/color]
    very[color=blue]
    > useful. They can't identify what the code should be doing, or what it[/color]
    isn't[color=blue]
    > doing. But developers tend to use these tools to say "But there can't be a
    > bug: my unit tests are infallible and my code coverage tool verified that
    > all paths have been executed".[/color]

    This is a straw man. Clearly anyone who, when presented with a problem
    says that it can't be in their code because their unit tests are
    infallible is crazy. I could present the other straw man, the developer
    who says "But there can't be a bug: I've walked through all the code
    and it did what it should."

    When presented with a problem, you verify that it really *is* a
    problem, write a unit test which shows that it's a problem (and check
    that that unit test fails on the current code), then fix the code and
    check that the unit test now passes.

    --
    Jon Skeet - <skeet@pobox.com>
    http://www.pobox.com/~skeet

    If replying to the group, please do not mail me too
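
    As a concrete illustration of the verify-then-fix workflow Jon describes
    above, here is a minimal sketch in NUnit style. The BankAccount class and
    every name in it are invented for the example, not code from this thread:

    using NUnit.Framework;

    // Minimal account class, invented purely for the example.
    public class BankAccount
    {
        private int balance;
        public BankAccount(int opening) { balance = opening; }
        public int Balance { get { return balance; } }

        public void Withdraw(int amount)
        {
            if (amount <= balance)   // the fix: reject overdrafts
                balance -= amount;
        }
    }

    [TestFixture]
    public class WithdrawalTests
    {
        // Step 1: written when the problem is reported, and confirmed to
        // FAIL against the unfixed code, proving the bug is real and
        // repeatable. Step 2: fix the code. Step 3: re-run and watch it pass.
        [Test]
        public void OverdraftLeavesBalanceUnchanged()
        {
            BankAccount account = new BankAccount(100);
            account.Withdraw(150); // the reported problem case
            Assert.AreEqual(100, account.Balance);
        }
    }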


    Comment

    • Alvin Bruney

      #32
      Re: c# 2004 (Whidbey)

      You know Harry, this is really true. I have a C++ background, and if I
      can't step thru that sucker to see what's going on, I get spooked,
      because so many things can go wrong in C++ and not show its butt in a
      unit test. I think that's probably less of a problem in .NET, but it's a
      habit for me that I can't shake off. Just today a guy thought my code had
      a bug; while he was explaining, I was stepping with my debugger (not
      really listening to him while he was yapping, ha ha), and I was able to
      determine, a little before he finished yapping, that he was doing
      something wrong. What can I say, old habits are as bad as good habits.
      I'll admit I have terrible habits.


      "Harry Bosch" <none@given.com > wrote in message
      news:Xns93D856D 1519BFhboschsbc @207.46.248.16. ..[color=blue]
      > "Niall" <asdf@me.com> wrote:
      >[color=green]
      > > I know what you mean. I'm not saying that the debugger is useless or a
      > > sin, I just don't agree that it should be entrenched as "required
      > > practice" in development, which is not what you're saying anyway (I
      > > think). Stepping through the code can be helpful to see if you've
      > > designed your code badly, but I don't think it's a requirement. If
      > > unit tests show that the program does what it's supposed to, and its
      > > performance is acceptable, then you can release it to the client
      > > without ever needing to step through the code.[/color]
      >
      > I often step through newly written code, to see if it's doing what I[/color]
      expect[color=blue]
      > (or hope :-). This was more of a standard practice for me when I was[/color]
      doing[color=blue]
      > C/C++, because often the API docs are unclear or insufficient, and you're
      > not entirely sure what you're getting back unless you step through and
      > actually look at it (the DWORD, the buffer, etc.). I do this far less now
      > in .NET, almost more of an old habit than a need, but it does point out
      > that in the unmanaged C++ world your certainty on the correctness of your
      > code is low.
      >
      > --
      > harry[/color]


      Comment

      • Jon Skeet

        #33
        Re: c# 2004 (Whidbey)

        Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com> wrote:
        > You know Harry, this is really true. I have a C++ background, and if
        > I can't step thru that sucker to see what's going on, I get spooked,
        > because so many things can go wrong in C++ and not show its butt in a
        > unit test. I think that's probably less of a problem in .NET, but
        > it's a habit for me that I can't shake off.

        That makes a lot more sense. One of the things I was thinking about
        when reading this thread is that if your code is sufficiently complex
        that it isn't crystal clear what's going on just from looking at it
        (so that peer review can't consist of simply reading the code), then
        it should almost certainly be simplified. The mark of a great engineer
        isn't that he produces complex code which a less able engineer can't
        understand, but that he can see his way clear to solving a complex
        problem by writing code that everyone can understand :)

        Of course, the various bugs in the VS.NET 2002 debugger make me
        somewhat more wary of stepping through code as well... I tend to regard
        stepping through the code in a debugger as a last resort as it only
        shows you what's happening *this* time. I'd rather take time to look at
        the whole code of a sequence and understand it in a more global sense,
        and then work out why it's not behaving as it should.

        --
        Jon Skeet - <skeet@pobox.com>
        http://www.pobox.com/~skeet

        If replying to the group, please do not mail me too

        Comment

        • Harry Bosch

          #34
          Re: c# 2004 (Whidbey)

          Jon Skeet <skeet@pobox.com> wrote:
          > That makes a lot more sense. One of the things I was thinking about
          > when reading this thread is that if your code is sufficiently
          > complex that it isn't crystal clear what's going on just from
          > looking at it (so that peer review can't consist of simply reading
          > the code), then it should almost certainly be simplified. The mark
          > of a great engineer isn't that he produces complex code which a
          > less able engineer can't understand, but that he can see his way
          > clear to solving a complex problem by writing code that everyone
          > can understand :)

          You hit the nail on the head. The point about being able to just look at
          the code and see that it is (or is not) correct is the goal of code
          clarity and simplification. And if you can write code that others can
          understand clearly, you're that much better at it.

          One thing I always hated when reviewing C++ code is those horrible string
          loops some people loved to write. You know the kind, where it's a mass of
          pointer variables with nested and's and or's, pre-/post-
          increment/decrement operators, and inline assignments, all shoved into as
          little space as possible. And most of the time, there was something in
          the standard library that would do what the messy code was attempting, or
          at least something that could simplify it. You can't look at code like
          that and know if it is correct or not, you have to trace through it and
          track the pointer values as you go, and check for ALL of the null
          pointer, past-the-end, and off-by-one errors. And the programmer would
          justify it as being "efficient" :-) Things like that belong in a library,
          written and debugged once (and optimized, if deemed necessary). And the
          standard classes and libraries have most of this already.
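
          Harry's point carries over to .NET directly: a hand-rolled scanning
          loop has to be traced to be trusted, while the equivalent library
          call can be verified at a glance. A small illustrative C# contrast
          (the class and method names are invented for the example):

          // Invented for the example: two ways to ask "does text contain target?".
          public class StringSearch
          {
              // Hard to review: a manual scan with index arithmetic; every
              // boundary case (empty target, end of string, off-by-one) must
              // be traced by hand to be trusted.
              public static bool ContainsManual(string text, string target)
              {
                  for (int i = 0; i + target.Length <= text.Length; i++)
                  {
                      int j = 0;
                      while (j < target.Length && text[i + j] == target[j])
                          j++;
                      if (j == target.Length)
                          return true;
                  }
                  return false;
              }

              // Correct at a glance: the standard library already does the work.
              public static bool ContainsLibrary(string text, string target)
              {
                  return text.IndexOf(target) >= 0;
              }
          }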

          As a funny aside, when I interviewed at Microsoft I was asked to write
          some super-efficient string handling code at the whiteboard. I had to
          laugh (to myself, of course :-) -- of all the things I could have been
          asked about! OTOH, I also had some fascinating discussions with some of
          the PM's, who were quite sharp and knowledgeable, so it was not all
          inappropriate.

          --
          harry

          Comment

          • Niall

            #35
            Re: c# 2004 (Whidbey)

            There's not too much in here that I haven't already said before in this
            thread, so I'll try to be brief (this time, for once :P)

            "Mark Pearce" <evil@bay.com > wrote in message
            news:OKGz2EwYDH A.2592@TK2MSFTN GP09.phx.gbl...[color=blue]
            > Hi Niall,
            >[color=green][color=darkred]
            > >> If unit tests show that the program does what it's supposed to, and its[/color][/color]
            > performance is acceptable, then you can release it to the client without
            > ever needing to step through the code. <<
            >
            > There are so many things wrong with this attitude that I don't know where[/color]
            to[color=blue]
            > start debugging your thought processes![/color]

            I'm surprised it took so long for someone to come out of the woodwork and
            say something like this :P

            > Unit tests can only test your current thinking about what your
            > code should be doing. Most developers can virtually guarantee
            > that their first ideas about what their code should be doing are
            > faulty. One of the major tools for correcting your ideas is to
            > walk through your code in a debugger to watch the code flow and
            > data flow. Relying on unit tests is just intellectual laziness, a
            > prop for faulty thinking.

            Unit tests evolve as your requirements evolve. Just as unit tests
            represent your current thinking, so does your step-through
            debugging. If you don't yet know that your function should behave
            in X manner, you won't write a unit test to ensure it does, and you
            also won't realise it's not doing that when you step through your
            code. In most cases, anything you are subconsciously checking as
            you step through your code can and should become an explicit test,
            as the sketch below illustrates. If you do this, you make sure
            everyone has the same ideas about what the code should be doing.
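
            For instance, a value you would normally eyeball in the watch
            window can be captured as an assertion instead. A minimal NUnit-
            style sketch (the Invoice class and its members are assumed purely
            for illustration, not taken from this thread):

            using NUnit.Framework;

            // Minimal invoice class, invented for the example.
            public class Invoice
            {
                private decimal total = 0m;
                public decimal Total { get { return total; } }

                public void AddLine(string item, int quantity, decimal unitPrice)
                {
                    total += quantity * unitPrice;
                }
            }

            [TestFixture]
            public class InvoiceTests
            {
                // The check you would otherwise perform by eye in the debugger
                // ("is Total right after adding these lines?"), made explicit
                // so every developer re-runs it on every build.
                [Test]
                public void TotalIncludesAllLineItems()
                {
                    Invoice invoice = new Invoice();
                    invoice.AddLine("Widget", 2, 10.00m);
                    invoice.AddLine("Gadget", 1, 5.50m);
                    Assert.AreEqual(25.50m, invoice.Total);
                }
            }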

            The only time I think this can be a problem is if you're dealing
            with API stuff; it might be pretty difficult to test what's going
            on outside of your control.

            > In addition, code coverage tools used to verify your unit tests
            > are not very useful. They can't identify what the code should be
            > doing, or what it isn't doing. But developers tend to use these
            > tools to say "But there can't be a bug: my unit tests are
            > infallible and my code coverage tool verified that all paths have
            > been executed".

            But the tests themselves can be examined. If someone has some buggy
            code, and has a unit test which should have picked that up, you can
            look at what happened, work things out, and improve the test. If
            someone has missed some buggy functionality in a step-through
            debugging session, then by the time the bug is discovered, no one
            has any idea why it wasn't originally caught. At least with unit
            tests, you can improve their usage by examining mistakes.

            As you say, there's no silver bullet. But I still contend that it's a rare
            case where you can pick something up with a debugger that you wouldn't have
            picked up if you correctly used unit tests. And (as I've said heaps of times
            before), unit tests have a lot of other advantages apart from just ensuring
            that code is correct.

            You say that relying on unit tests is laziness. At my company, not
            writing the tests is laziness. It's easy to write some code, step
            through it, bring up the form that uses it and toy around with it
            for a bit. But if you don't stop to document what behaviour you
            want and how to push the code and make sure it works properly, then
            only the original programmer benefits from that testing, and they
            only benefit once - when they do the test. After that, there's no
            persisting benefit from the debug step-through. I don't have some
            kind of a rule saying I won't use the debugger, I just don't use it
            to replace a unit test.

            Man, if I could have written this much so easily at high school, I
            would have scored much better in those English essays..

            Niall


            Comment

            • Jeff Ogata

              #36
              Re: c# 2004 (Whidbey)

              Hi Niall,

              I'm actually not going to get in between you and Mark here ;), I just had a
              couple of quick questions on getting started with unit testing.

              Can you point me to any resources that will give me a quick
              overview of unit testing? Are there any tools that you recommend?
              I've briefly looked at NUnit -- is that the way to go?

              Thanks,
              Jeff

              "Niall" <asdf@me.com> wrote in message
              news:uSjXksuZDH A.1940@TK2MSFTN GP10.phx.gbl...[color=blue][color=green][color=darkred]
              > > >> Just as unit tests represent your current thinking, so does your step[/color]
              > > through debugging. <<
              > >
              > > On the contrary, stepping through code shows you what's *actually*
              > > happening, not what you think is happening, or what you think should be
              > > happening. This is a crucial distinction, and what makes stepping[/color][/color]
              through[color=blue][color=green]
              > > code such a powerful technique.[/color]
              >
              > What I meant was this: you can only validate the workings of a function
              > based on what you currently think it should be doing. If a function is
              > giving incorrect output, but you don't know that it's incorrect, then you
              > won't notice when you are stepping through the code. If a square root
              > funciton is giving you 3 when you pass it 16, then you have to know that
              > you're expecting 4 before you will realise it's behaving incorrectly. So
              > what I'm saying is that there's very little different between stepping
              > through your code with the number 4 in your head than writing a unit test
              > which calls the function and compares the result to 4. The advantage of[/color]
              the[color=blue]
              > latter one is that with the unit test, you've documented the expected
              > behaviour and ensured that if, at any time, the function stops meeting[/color]
              that[color=blue]
              > requirement, it will be known instantly, without waiting for the next[/color]
              person[color=blue]
              > to step through the code.
              >
              >[color=green][color=darkred]
              > > >> The only time I think this can be a problem is if you're dealing with[/color][/color]
              > API[color=green]
              > > stuff, it might be pretty difficult to test what's going on outside of[/color]
              > your[color=green]
              > > control. <<
              > >
              > > Well, increasingly nowadays with component software, and especially with
              > > .NET, the majority of all commercial programming involves the extensive[/color]
              > use[color=green]
              > > of APIs. The .NET Framework itself is just one huge set of APIs. And[/color][/color]
              every[color=blue][color=green]
              > > time you use a third-party control library, you're making dozens of
              > > assumptions about the behaviour of that control. The same goes with any
              > > other third-party library.[/color]
              >
              > By APIs, I was referring more to calls that go outside your environment.
              > Sure, .Net is a bunch of APIs, but .Net is your environment, and it's very
              > easy to look at the values of .Net types in a unit test and ensure they[/color]
              are[color=blue]
              > being set correctly by your code.
              >
              >[color=green][color=darkred]
              > > >> But I still contend that it's a rare case where you can pick[/color][/color][/color]
              something[color=blue]
              > up[color=green]
              > > with a debugger that you wouldn't have picked up if you correctly used[/color]
              > unit[color=green]
              > > tests. <<
              > >
              > > But that's a self-fulfilling prophecy, because you've already stated[/color][/color]
              that[color=blue][color=green]
              > > you don't often step through code. Try, for just one week, to step[/color][/color]
              through[color=blue][color=green]
              > > every piece of code as you write it. Be intellectually honest during[/color][/color]
              that[color=blue][color=green]
              > > week, and try to identify where this technique is helping you to find
              > > design, construction, and testing bugs.[/color]
              >
              > From this, I think you are using stepping through the code to pick up more
              > than false results. Unit testing is a good tool to help you design your[/color]
              code[color=blue]
              > to meet the requirements/expected behaviour, but it's not a tool that's
              > meant to be used to design application architectecture , etc, and I haven't
              > claimed that it is. The point of unit testing is to pin the program to the
              > requirements, because the requirements are the most important thing,
              > otherwise why are you writing the program. The advantage of this approach[/color]
              is[color=blue]
              > that you can change your design however you want, as long as the tests are
              > passing and you've written them properly, then your program should still[/color]
              be[color=blue]
              > meeting the requirements. So when you talk about design and construction
              > bugs, unit testing isn't aimed at these situations.
              >
              > People seem to be confused about unit testing thinking it either means
              > automated scripts that are executed by some GUI testing framework
              > application, testing things like "Did the control paint itself?" etc, or
              > thinking that you just call your code and if there's no exception, then
              > all's dandy. That Test Automation Snake Oil paper you posted the link to[/color]
              was[color=blue]
              > talking about GUI test scripts, not unit testing, for example.
              >
              > Unit testing is under the hood testing, as is step through debugging. What[/color]
              I[color=blue]
              > am talking about is a specific test condition for each and every expected
              > behaviour of the function. Test condition and the expected behaviour are[/color]
              two[color=blue]
              > angles on the same thing. This is why I don't see how I could pick up
              > incorrect results of a function from stepping through but not from a unit
              > test, because every single thing I know the function should be doing gets
              > put into the unit test. Hence, any remaining bugs are bugs I wouldn't
              > recognise anyway, so what is stepping through the code going to do for me?
              >
              > As I've said before, I do use the debugger to step through code, but not[/color]
              to[color=blue]
              > ensure code correctness, and hence it's not a religious requirement for me
              > to step through every line I write. It is, however, a requirement for me[/color]
              to[color=blue]
              > test what I write.
              >
              >[color=green]
              > > I find several of these bugs every day by stepping through my code and
              > > through code belonging to other developers. It's a very powerful[/color][/color]
              debugging[color=blue][color=green]
              > > technique, but of course should be used in addition to your manual and
              > > automated unit tests. You should be using every tool in your debugging
              > > arsenal, rather than just cherry-picking the ones that you believe[/color]
              > (possibly[color=green]
              > > mistakenly) to give the most bang-per-buck.[/color]
              >
              > I do use a range of tools, but I believe that for testing code[/color]
              correctness,[color=blue]
              > unit testing is more thorough, faster, and more communicative to the other
              > developers than stepping through the function. If you write a function,[/color]
              step[color=blue]
              > through it and decide it's working ok, but the next person changes the[/color]
              code[color=blue]
              > a little, you need to make sure they are checking for exactly the same
              > things as you were when you stepped through. Otherwise, there is the
              > possibility they can break something without realising it. If you document
              > all the requirements of the function in a unit test, then you force the[/color]
              next[color=blue]
              > person to come along to adhere to your contract.
              >
              > Maybe I'm missing something about the way you step through your code. How,
              > exactly, do you verify that it's correct? I'm not talking about design or
              > performance, just that the results of the function are what they should[/color]
              be,[color=blue]
              > given the inputs/state of the object, etc. If I was doing it, I would be
              > stepping through the code, looking at values of the results of[/color]
              calculations,[color=blue]
              > ensuring that the right branches in conditional statements were being[/color]
              taken,[color=blue]
              > exceptions were being thrown when they should be, this kind of thing. All
              > this you can do in a unit test. What more do you check for, and how do you
              > do it?
              >
              > Niall
              >
              >[/color]
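
              To make Niall's square-root example above concrete: the unit test
              is literally "the number 4 in your head" written down. A minimal
              sketch (MyMath is a hypothetical class invented for the example,
              not code from this thread):

              using NUnit.Framework;

              // Hypothetical wrapper under test, invented for the example.
              public class MyMath
              {
                  public static double Sqrt(double x)
                  {
                      return System.Math.Sqrt(x);
                  }
              }

              [TestFixture]
              public class SqrtTests
              {
                  // Documents the expectation permanently: if Sqrt(16) ever
                  // stops returning 4, the next test run reports the failure,
                  // with no one needing to step through the code to notice.
                  [Test]
                  public void SqrtOfSixteenIsFour()
                  {
                      Assert.AreEqual(4.0, MyMath.Sqrt(16.0), 0.0001);
                  }
              }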


              Comment

              • Niall

                #37
                Re: c# 2004 (Whidbey)

                I don't know too many sites talking about unit testing, but it
                gets lots of hits on Google.

                There is http://www.extremeprogramming.org/rules/unittests.html,
                which is fairly general, but it gives links to other things on
                the site related to unit tests and how they can be used.

                There is also http://www.c2.com/cgi/wiki?UnitTest. On this site, they talk
                about XP style unit tests being called "Programmer Tests", though I have
                never heard them referred to as such before. There is heaps of information
                on lots of things on this site, though I find I can wander for a long time
                through all the links.

                This is the first job I've had where unit tests are used, so I've only used
                one framework, which is NUnit. We actually run on the older version of NUnit
                because we've been using it for about 2 years, and we've modified the code
                (it's open source, which is useful) significantly to meet our needs. As a
                result, migrating to the newer NUnit would be a decent bit of work, and what
                we have now does the trick. The new NUnit has a few features which are quite
                handy, like the ability to unload the assembly while the test rig is
                running, so you can run your test, go fix the code and run again without
                having to restart the test app. So yeah, NUnit is pretty good. There may be
                better ones out there, I'm not really sure.
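
                For anyone who hasn't seen one, an NUnit test is just an
                attributed class. A minimal sketch in the NUnit 2.x style (the
                fixture and test names are invented for the example):

                using NUnit.Framework;

                [TestFixture]
                public class StackTests
                {
                    private System.Collections.Stack stack;

                    // Runs before each [Test] method, so every test starts
                    // with a fresh object.
                    [SetUp]
                    public void Init()
                    {
                        stack = new System.Collections.Stack();
                    }

                    [Test]
                    public void PushThenPopReturnsSameItem()
                    {
                        stack.Push("hello");
                        Assert.AreEqual("hello", stack.Pop());
                    }
                }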

                Niall


                "Jeff Ogata" <jogata@eatmysp am.com> wrote in message
                news:e$d0t0uZDH A.2308@TK2MSFTN GP12.phx.gbl...[color=blue]
                > Hi Niall,
                >
                > I'm actually not going to get in between you and Mark here ;), I just had[/color]
                a[color=blue]
                > couple of quick questions on getting started with unit testing.
                >
                > Can you point me to any resources that will give me quick overview of unit
                > testing? Are there any tools that you recommend? I've briefly looked at
                > NUnit -- is that the way to go?
                >
                > Thanks,
                > Jeff[/color]


                Comment

                • Niall

                  #38
                  Re: c# 2004 (Whidbey)

                  > This is perhaps the crux of where we differ - your attitude
                  > seems to be that your main responsibility is to ensure that
                  > the code you're testing meets the stated requirements. In
                  > my case, that's the very least of what I'm looking for -
                  > indeed, I am usually surprised if my code has this type of
                  > bug, so I don't tend to spend the majority of my time
                  > looking for it.
                  >
                  > Instead, the majority of my time is spent looking for
                  > omissions or mistakes in the requirements, and for design
                  > bugs and implementation mistakes. Stepping through my code
                  > is one essential technique here, and so is the use of code
                  > reviews. Studies show that each of these techniques finds a
                  > different set of bugs.

                  What do you mean by implementation mistakes? Do you mean mistakes such that
                  the code doesn't do what it's supposed to do, or mistakes such as slower
                  code, messy code, etc? What I'm wondering is how you can be sure that your
                  code fully meets the requirements if you don't test it against the
                  requirements?

                  I agree that bad design and slow code aren't really the
                  domain of unit testing. On the performance front, if a
                  particular function has to run in less than a certain amount
                  of time, you can always write a test that fails if it takes
                  longer (see the sketch below). I guess the attitude of the
                  unit test is to give you the most important thing - a program
                  which does what the customer wants. At the end of the day,
                  the customer doesn't care if you have spaghetti code, as long
                  as the program works. On the other hand, if you have a great
                  design, but your program doesn't do what they want, then they
                  won't be pleased.
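
                  A timing test of that kind can be as simple as asserting on
                  elapsed wall-clock time. A rough sketch (OrderProcessor and
                  the 500 ms budget are invented for the example; DateTime is
                  used because .NET 1.x has no Stopwatch class, and the budget
                  needs generous headroom because wall-clock timing is noisy):

                  using System;
                  using NUnit.Framework;

                  // Stand-in for the real work, invented for the example.
                  public class OrderProcessor
                  {
                      public static void ProcessOrders()
                      {
                          // ... real processing would happen here ...
                      }
                  }

                  [TestFixture]
                  public class PerformanceTests
                  {
                      // Fails if the function drifts past its time budget.
                      [Test]
                      public void ProcessOrdersCompletesWithinBudget()
                      {
                          DateTime start = DateTime.Now;
                          OrderProcessor.ProcessOrders();
                          TimeSpan elapsed = DateTime.Now - start;

                          Assert.IsTrue(elapsed.TotalMilliseconds < 500,
                              "ProcessOrders took " + elapsed.TotalMilliseconds + " ms");
                      }
                  }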

                  I'm not advocating a mindset of "It works, meets the requirements fully, so
                  lock it away and it doesn't matter if it has a bad design." In fact, it's a
                  bit the opposite. Once you have the unit tests solidly down, then you know
                  your code meets the requirements, and you know that if something breaks, you
                  will find out about it. So at any time, anyone can come along and change the
                  code to what they think is a better design, faster, etc. So the unit test
                  doesn't test design, but it gives you a safeguard to facilitate design
                  changes.

                  > To their surprise, the breakpoint was never hit and the
                  > test completed successfully. A quick investigation with the
                  > debugger showed that a function a few steps up the call
                  > chain had an optimisation that allowed it sometimes to skip
                  > unnecessary work. In this case, it skipped the new code.
                  > Treating the code as a black box in this case just didn't
                  > work - the developer put in some inputs, got correct
                  > outputs, and entirely missed the fact that his new code
                  > wasn't being executed at all.

                  This isn't the way I write unit tests, and as far as I've
                  seen, it's not the way they're supposed to be written,
                  either. The unit test is supposed to isolate and target
                  specific areas of code. So there should be a test that
                  specifically targets the function in question, ignoring the
                  optimisation. As for the optimisation in the other method,
                  there should be a unit test that specifically targets that as
                  well. Unit tests which test at a higher level are ok too, and
                  it's probably a good idea to watch the behaviour of the
                  object at a higher level as well, but like I said before,
                  unit testing is under the hood testing.

                  > Here's another example, closer to home. The following code
                  > shows a nasty bug:
                  >
                  > bool AccessGranted = true;
                  >
                  > try
                  > {
                  >     // See if we have access to c:\test.txt
                  >     new FileStream(@"c:\test.txt",
                  >                    FileMode.Open,
                  >                    FileAccess.Read).Close();
                  > }
                  > catch (SecurityException x)
                  > {
                  >     // access denied
                  >     AccessGranted = false;
                  > }
                  > catch (...)
                  > {
                  >     // something else happened
                  > }
                  >
                  > If the CLR grants access to the test file in this example,
                  > everything works fine. If the CLR denies access, everything
                  > works fine as well because a SecurityException is thrown.
                  > But what happens if the discretionary access control list
                  > (DACL) on the file doesn't grant access? Then a different
                  > exception is thrown, but AccessGranted will return true
                  > because of the optimistic assumption made on the first line
                  > of code. The bug was really in the requirements as well as
                  > in the code, because they didn't state what should happen
                  > if the CLR granted access, but the file system didn't.
                  > Stepping through this code would have shown that a
                  > completely different exception was being thrown when the
                  > DACL denied access, and therefore would have pointed to the
                  > omission in requirements as well as finding the bug.

                  I think this could have been coded better. In general, when
                  you have code that is trying to discover whether you can do a
                  certain thing, it should presume the negative in the
                  beginning. Especially for things like "Do I have the security
                  rights to do <xyz>", it should always be false in the
                  beginning. That way you ensure that the only time you ever
                  think you have the right to do the action is when the code
                  that recognises the affirmative is run.

                  I'm not sure if you mean the base Exception type by the "..."
                  in your code. I wouldn't write code that catches just a plain
                  exception unless it was planning to wrap the exception in a
                  more meaningful one and re-throw. If you didn't mean to catch
                  the base Exception type there, then the DACL exception
                  wouldn't be caught, and the unit test would show an error.
                  This is the way NUnit behaves: if an assertion fails, you get
                  a test failure; if an exception escapes from the test, you
                  get a test error.
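
                  NUnit can also make the expected exception part of the
                  contract itself, via the NUnit 2.x [ExpectedException]
                  attribute. A minimal sketch, assuming a caller that is denied
                  access (FileGuard is invented for the example):

                  using System.IO;
                  using System.Security;
                  using NUnit.Framework;

                  // Invented for the example: opens the file to prove access.
                  public class FileGuard
                  {
                      public static void CheckAccess(string path)
                      {
                          new FileStream(path, FileMode.Open, FileAccess.Read).Close();
                      }
                  }

                  [TestFixture]
                  public class AccessTests
                  {
                      // Passes only if CheckAccess throws SecurityException.
                      // Any *other* exception escaping the test (such as the
                      // DACL case throwing something unexpected) is reported
                      // as a test error, not a pass.
                      [Test]
                      [ExpectedException(typeof(SecurityException))]
                      public void DeniedCallerRaisesSecurityException()
                      {
                          FileGuard.CheckAccess(@"c:\test.txt");
                      }
                  }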

                  Personally, I would have written the function like this:

                  bool AccessGranted = false;
                  try
                  {
                      // Only claim access once the open actually succeeds.
                      new FileStream(@"c:\test.txt",
                                     FileMode.Open,
                                     FileAccess.Read).Close();
                      AccessGranted = true;
                  }
                  catch (SecurityException)
                  {
                  }

                  But, of course, an even more powerful tool than unit testing is hindsight...

                  Niall


                  Comment


                    • Alvin Bruney

                      #40
                      Re: c# 2004 (Whidbey)

                      > most important thing - a program which does what the
                      > customer wants. At the end of the day, the customer
                      > doesn't care if you have spaghetti code, as long as the
                      > program works.

                      That stopped being true years ago. Customers now want
                      their source code, or even their in-house programmers to
                      look at the code. They are getting wiser about having it
                      done properly in the first place, so high-priced
                      programmers don't bleed them dry trying to make bread out
                      of sphaghetti. spahgetti. spahghetti. I know it doesn't
                      look right, I just can't remember how to spell it.


                      Comment


                        • Mark Pearce

                          #42
                          Re: c# 2004 (Whidbey)

                          Hi Niall,
                          >> What I'm wondering is how you can be sure that your
                          code fully meets the requirements if you don't test it
                          against the requirements? <<

                          Of course I test against requirements, but for me, that's just the first
                          step. After that, I start testing to find requirements bugs, design bugs,
                          implementation bugs, etc.
                          >> I guess the attitude of the unit test is to give
                          you the most important thing - a program which does
                          what the customer wants. At the end of the day, the
                          customer doesn't care if you have spaghetti code, as
                          long as the program works. <<

                          If you're doing custom software development, your customer certainly cares
                          deeply about the internal quality of the code. The customer's staff will
                          have to debug, maintain and enhance the code for months and years to come.
                          If you're doing product development, the end-customers may not care about
                          the internal quality of the code, but the company for which you're
                          developing the product certainly cares. Once again, the company will have to
                          debug, maintain and enhance the product for a long time.

                          So somebody, somewhere, *always* cares about the code's internal quality.
                          >> So the unit test doesn't test design, but it gives
                          you a safeguard to facilitate design changes. <<

                          Here at least we agree on something!
                          >> The unit test is supposed to isolate and target
                          specific areas of code. <<

                          The developer in question thought that he *was* targeting a specific area of
                          code. He didn't know about the optimisation, and indeed had no way of
                          knowing about the optimisation without stepping through the code in a
                          source-level debugger.
                          >> I think this could have been coded better. <<

                          Of course in hindsight, it should have been coded better. But you've just
                          been arguing that the internal code quality doesn't really matter, as long
                          as the unit tests are satisfied. You can't have it both ways.
                          >> I wouldn't write code that catches just a plain
                          exception unless it was planning to wrap the exception
                          in a more meaningful one and re-throw. <<

                          There are various reasons to catch System.Exception,
                          including the one you mentioned. Another reason is to
                          reverse a transaction after any exception. Yet another
                          reason is that many developers often put in a
                          System.Exception catch during testing, and forget to
                          remove it. And yet another reason is that developers
                          sometimes don't realise that you shouldn't catch
                          System.Exception without rethrowing it. Finally, a few
                          of the .NET base class methods suppress all exceptions
                          for "security" reasons, and just fail silently.

                          Whatever the reason, you won't find this type of requirements/coding bug
                          unless you step through the code and find that an unexpected exception type
                          was being silently thrown and caught.
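
                          The transaction case Mark mentions is the classic
                          legitimate catch of System.Exception: roll back on any
                          failure, then rethrow so the caller still sees the
                          original error. A rough sketch against ADO.NET (the
                          TransferService class is invented for the example):

                          using System;
                          using System.Data.SqlClient;

                          public class TransferService
                          {
                              // Catching Exception is defensible here only
                              // because it is rethrown after the rollback -
                              // nothing is swallowed.
                              public static void Transfer(SqlConnection connection)
                              {
                                  SqlTransaction transaction = connection.BeginTransaction();
                                  try
                                  {
                                      // ... execute commands on the transaction here ...
                                      transaction.Commit();
                                  }
                                  catch (Exception)
                                  {
                                      transaction.Rollback();
                                      throw; // preserve the original exception
                                  }
                              }
                          }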

                          Regards,

                          Mark
                          ----
                          Author of "Comprehens ive VB .NET Debugging"



                          "Niall" <asdf@me.com> wrote in message
                          news:uwNhs7GaDH A.1492@TK2MSFTN GP12.phx.gbl...[color=blue]
                          > This is perhaps the crux of where we differ - your attitude seems to be[/color]
                          that[color=blue]
                          > your main responsibility is to ensure that the code you're testing meets[/color]
                          the[color=blue]
                          > stated requirements. In my case, that's the very least of what I'm looking
                          > for - indeed, I am usually surprised if my code has this type of bug, so I
                          > don't tend to spend the majority of my time looking for it.
                          >
                          > Instead, the majority of my time is spent looking for omissions or[/color]
                          mistakes[color=blue]
                          > in the requirements, and for design bugs and implementation mistakes.
                          > Stepping though my code is one essential technique here, and so is the use
                          > of code reviews. Studies show that each of these techniques finds a
                          > different set of bugs.[/color]

                          What do you mean by implementation mistakes? Do you mean mistakes such that
                          the code doesn't do what it's supposed to do, or mistakes such as slower
                          code, messy code, etc? What I'm wondering is how you can be sure that your
                          code fully meets the requirements if you don't test it against the
                          requirements?

                          I agree that bad design, slower code isn't really the domain of unit
                          testing. With the performance thing, if a particular function has to run in
                          less than a certain amount of time, you can always write a test that fails
                          if it takes longer. I guess the attitude of the unit test is to give you the
                          most important thing - a program which does what the customer wants. At the
                          end of the day, the customer doesn't care if you have spaghetti code, as
                          long as the program works. On the other hand, if you have a great design,
                          but your program doesn't do what they want, then they won't be pleased.

                          I'm not advocating a mindset of "It works, meets the requirements fully, so
                          lock it away and it doesn't matter if it has a bad design." In fact, it's a
                          bit the opposite. Once you have the unit tests solidly down, then you know
                          your code meets the requirements, and you know that if something breaks, you
                          will find out about it. So at any time, anyone can come along and change the
                          code to what they think is a better design, faster, etc. So the unit test
                          doesn't test design, but it gives you a safeguard to facilitate design
                          changes.

                          [color=blue]
                          > To their surprise, the breakpoint was never hit and the test completed
                          > successfully. A quick investigation with the debugger showed that a[/color]
                          function[color=blue]
                          > a few steps up the call chain had an optimisation that allowed it[/color]
                          sometimes[color=blue]
                          > to skip unnecessary work. In this case, it skipped the new code. Treating
                          > the code as a black box in this case just didn't work - the developer put[/color]
                          in[color=blue]
                          > some inputs, got correct outputs and entirely missed that fact that his[/color]
                          new[color=blue]
                          > code wasn't being executed at all.[/color]

                          This isn't the way I write unit tests, and as far as I've seen, it's not the
                          way that they're supposed to be written, either. The unit test is supposed
                          to isolate and target specific areas of code. So there should be a test that
                          specifically targets the function in question, ignoring the optimisation. As
                          for the optimisation in the other method - there should be a unit test that
                          specifically targets that as well. Unit tests which test at a higher level
                          are ok too, and probably a good idea to watch the behaviour of the object at
                          a higher level as well, but like I said before, unit testing is under the
                          hood testing.

> Here's another example, closer to home. The following code shows a nasty
> bug:
>
> bool AccessGranted = true;
>
> try
> {
> // See if we have access to c:\test.txt
> new FileStream(@"c:\test.txt",
> FileMode.Open,
> FileAccess.Read).Close();
> }
> catch (SecurityException x)
> {
> // access denied
> AccessGranted = false;
> }
> catch (...)
> {
> // something else happened
> }
>
> If the CLR grants access to the test file in this example, everything works
> fine. If the CLR denies access, everything works fine as well because a
> SecurityException is thrown. But what happens if the discretionary access
> control list (DACL) on the file doesn't grant access? Then a different
> exception is thrown, but AccessGranted will return true because of the
> optimistic assumption made on the first line of code. The bug was really in
> the requirements as well as in the code, because they didn't state what
> should happen if the CLR granted access, but the file system didn't.
> Stepping through this code would have shown that a completely different
> exception was being thrown when the DACL denied access, and therefore would
> have pointed to the omission in requirements as well as finding the bug.

I think this could have been coded better. In general, when you have code
that is trying to discover whether you can do a certain thing, it should
presume the negative at the beginning. Especially for things like "Do I have
the security rights to do <xyz>?", the flag should always start out false.
That way, the only time you ever think you have the right to do the action
is when the code that recognises the affirmative has run.

I'm not sure if you mean the base Exception type by the "..." in your code.
I wouldn't write code that catches a plain Exception unless it was planning
to wrap the exception in a more meaningful one and re-throw. If you didn't
mean to catch the base Exception type there, then the DACL exception
wouldn't be caught, and the unit test would show an error. This is the way
NUnit behaves - if an assertion fails, you get a test failure; if an
exception escapes from the test, you get a test error.
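
To make that concrete, here's a minimal sketch (CheckAccess and
c:\locked.txt are just placeholders for the check being discussed) showing
how the two outcomes surface:

using System.IO;
using System.Security;
using NUnit.Framework;

[TestFixture]
public class AccessTests
{
    [Test]
    public void DeniedFileReportsNoAccess()
    {
        // A wrong answer from CheckAccess surfaces as a test
        // *failure*; an unexpected exception (e.g. the one thrown
        // for a DACL denial) escapes and surfaces as a test *error*.
        Assert.IsFalse(CheckAccess(@"c:\locked.txt"));
    }

    private static bool CheckAccess(string path)
    {
        bool granted = false;
        try
        {
            new FileStream(path, FileMode.Open, FileAccess.Read).Close();
            granted = true;
        }
        catch (SecurityException)
        {
            // CLR denied access - granted stays false.
        }
        return granted;
    }
}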

Personally, I would have written the function like:

bool AccessGranted = false;
try
{
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
    AccessGranted = true;
}
catch (SecurityException)
{
}

                          But, of course, an even more powerful tool than unit testing is hindsight...

                          Niall



                          Comment

                          • Alvin Bruney

                            #43
                            Re: c# 2004 (Whidbey)

> catch during testing, and forget to remove it. And yet another reason is
> that developers sometimes don't realise that you shouldn't catch
> System.Exception without rethrowing it. Finally, a few of the .NET base

I totally disagree with that statement. I'm here wondering why you made it.
Would you care to explain? I can partially see your point, but that
statement is just too general to let slide.

                            "Mark Pearce" <evil@bay.com > wrote in message
                            news:#h2KCFOaDH A.4020@tk2msftn gp13.phx.gbl...[color=blue]
                            > Hi Niall,
                            >[color=green][color=darkred]
                            > >> What I'm wondering is how you can be sure that your code fully meets[/color][/color][/color]
                            the[color=blue]
                            > requirements if you don't test it against the requirements? <<
                            >
                            > Of course I test against requirements, but for me, that's just the first
                            > step. After that, I start testing to find requirements bugs, design bugs,
                            > implementation bugs, etc.
                            >[color=green][color=darkred]
                            > >> I guess the attitude of the unit test is to give you the most important[/color][/color]
                            > thing - a program which does what the customer wants. At the end of the[/color]
                            day,[color=blue]
                            > the customer doesn't care if you have spaghetti code, as long as the[/color]
                            program[color=blue]
                            > works. <<
                            >
                            > If you're doing custom software development, your customer certainly cares
                            > deeply about the internal quality of the code. The customer's staff will
                            > have to debug, maintain and enhance the code for months and years to come.
                            > If you're doing product development, the end-customers may not care about
                            > the internal quality of the code, but the company for which you're
                            > developing the product certainly cares. Once again, the company will have[/color]
                            to[color=blue]
                            > debug, maintain and enhance the product for a long time.
                            >
                            > So somebody, somewhere, *always* cares about the code's internal quality.
                            >[color=green][color=darkred]
                            > >> So the unit test doesn't test design, but it gives you a safeguard to[/color][/color]
                            > facilitate design changes. <<
                            >
                            > Here at least we agree on something!
                            >[color=green][color=darkred]
                            > >> The unit test is supposed to isolate and target specific areas of code.[/color][/color]
                            > <<
                            >
                            > The developer in question thought that he *was* targeting a specific area[/color]
                            of[color=blue]
                            > code. He didn't know about the optimisation, and indeed had no way of
                            > knowing about the optimisation without stepping through the code in a
                            > source-level debugger.
                            >[color=green][color=darkred]
                            > >> I think this could have been coded better. <<[/color][/color]
                            >
                            > Of course in hindsight, it should have been coded better. But you've just
                            > been arguing that the internal code quality doesn't really matter, as long
                            > as the unit tests are satisfied. You can't have it both ways.
                            >[color=green][color=darkred]
                            > >> I wouldn't write code that catches just a plain exception unless it was[/color][/color]
                            > planning to wrap the exception in a more meaningful one and re-throw. <<
                            >
                            > There are various reasons to catch System.Exceptio n, including the one you
                            > mentioned. Another reason is to reverse a transaction after any exception.
                            > Yet another reason is that many developers often put in a System.Exceptio n
                            > catch during testing, and forget to remove it. And yet another reason is
                            > that developers sometimes don't realise that you shouldn't catch
                            > System.Exceptio n without rethrowing it. Finally, a few of the .NET base
                            > class methods suppress all exceptions for "security" reasons, and just[/color]
                            fail[color=blue]
                            > silently.
                            >
                            > Whatever the reason, you won't find this type of requirements/coding bug
                            > unless you step through the code and find that an unexpected exception[/color]
                            type[color=blue]
                            > was being silently thrown and caught.
                            >
                            > Regards,
                            >
                            > Mark
                            > ----
                            > Author of "Comprehens ive VB .NET Debugging"
                            > http://www.apress.com/book/bookDisplay.html?bID=128
                            >
                            >
                            > "Niall" <asdf@me.com> wrote in message
                            > news:uwNhs7GaDH A.1492@TK2MSFTN GP12.phx.gbl...[color=green]
                            > > This is perhaps the crux of where we differ - your attitude seems to be[/color]
                            > that[color=green]
                            > > your main responsibility is to ensure that the code you're testing meets[/color]
                            > the[color=green]
                            > > stated requirements. In my case, that's the very least of what I'm[/color][/color]
                            looking[color=blue][color=green]
                            > > for - indeed, I am usually surprised if my code has this type of bug, so[/color][/color]
                            I[color=blue][color=green]
                            > > don't tend to spend the majority of my time looking for it.
                            > >
                            > > Instead, the majority of my time is spent looking for omissions or[/color]
                            > mistakes[color=green]
                            > > in the requirements, and for design bugs and implementation mistakes.
                            > > Stepping though my code is one essential technique here, and so is the[/color][/color]
                            use[color=blue][color=green]
                            > > of code reviews. Studies show that each of these techniques finds a
                            > > different set of bugs.[/color]
                            >
                            > What do you mean by implementation mistakes? Do you mean mistakes such[/color]
                            that[color=blue]
                            > the code doesn't do what it's supposed to do, or mistakes such as slower
                            > code, messy code, etc? What I'm wondering is how you can be sure that your
                            > code fully meets the requirements if you don't test it against the
                            > requirements?
                            >
                            > I agree that bad design, slower code isn't really the domain of unit
                            > testing. With the performance thing, if a particular function has to run[/color]
                            in[color=blue]
                            > less than a certain amount of time, you can always write a test that fails
                            > if it takes longer. I guess the attitude of the unit test is to give you[/color]
                            the[color=blue]
                            > most important thing - a program which does what the customer wants. At[/color]
                            the[color=blue]
                            > end of the day, the customer doesn't care if you have spaghetti code, as
                            > long as the program works. On the other hand, if you have a great design,
                            > but your program doesn't do what they want, then they won't be pleased.
                            >
                            > I'm not advocating a mindset of "It works, meets the requirements fully,[/color]
                            so[color=blue]
                            > lock it away and it doesn't matter if it has a bad design." In fact, it's[/color]
                            a[color=blue]
                            > bit the opposite. Once you have the unit tests solidly down, then you know
                            > your code meets the requirements, and you know that if something breaks,[/color]
                            you[color=blue]
                            > will find out about it. So at any time, anyone can come along and change[/color]
                            the[color=blue]
                            > code to what they think is a better design, faster, etc. So the unit test
                            > doesn't test design, but it gives you a safeguard to facilitate design
                            > changes.
                            >
                            >[color=green]
                            > > To their surprise, the breakpoint was never hit and the test completed
                            > > successfully. A quick investigation with the debugger showed that a[/color]
                            > function[color=green]
                            > > a few steps up the call chain had an optimisation that allowed it[/color]
                            > sometimes[color=green]
                            > > to skip unnecessary work. In this case, it skipped the new code.[/color][/color]
                            Treating[color=blue][color=green]
                            > > the code as a black box in this case just didn't work - the developer[/color][/color]
                            put[color=blue]
                            > in[color=green]
                            > > some inputs, got correct outputs and entirely missed that fact that his[/color]
                            > new[color=green]
                            > > code wasn't being executed at all.[/color]
                            >
                            > This isn't the way I write unit tests, and as far as I've seen, it's not[/color]
                            the[color=blue]
                            > way that they're supposed to be written, either. The unit test is supposed
                            > to isolate and target specific areas of code. So there should be a test[/color]
                            that[color=blue]
                            > specifically targets the function in question, ignoring the optimisation.[/color]
                            As[color=blue]
                            > for the optimisation in the other method - there should be a unit test[/color]
                            that[color=blue]
                            > specifically targets that as well. Unit tests which test at a higher level
                            > are ok too, and probably a good idea to watch the behaviour of the object[/color]
                            at[color=blue]
                            > a higher level as well, but like I said before, unit testing is under the
                            > hood testing.
                            >
                            >[color=green]
                            > > Here's another example, closer to home. The following code shows a nasty
                            > > bug:
                            > >
                            > > bool AccessGranted = true;
                            > >
                            > > try
                            > > {
                            > > // See if we have access to c:\test.txt
                            > > new FileStream(@"c: \test.txt",
                            > > FileMode.Open,
                            > > FileAccess.Read ).Close();
                            > > }
                            > > catch (SecurityExcept ion x)
                            > > {
                            > > // access denied
                            > > AccessGranted = false;
                            > > }
                            > > catch (...)
                            > > {
                            > > // something else happened
                            > > }
                            > >
                            > > If the CLR grants access to the test file in this example, everything[/color]
                            > works[color=green]
                            > > fine. If the CLR denies access, everything works fine as well because a
                            > > SecurityExcepti on is thrown. But what happens if the discretionary[/color][/color]
                            access[color=blue][color=green]
                            > > control list (DACL) on the file doesn't grant access? Then a different
                            > > exception is thrown, but AccessGranted will return true because of the
                            > > optimistic assumption made on the first line of code. The bug was really[/color]
                            > in[color=green]
                            > > the requirements as well as in the code, because they didn't state what
                            > > should happen if the CLR granted access, but the file system didn't.
                            > > Stepping through this code would have shown that a completely different
                            > > exception was being thrown when the DACL denied access, and therefore[/color]
                            > would[color=green]
                            > > have pointed to the omission in requirements as well as finding the bug.[/color]
                            >
                            > I think this could have been coded better. In general, when you have code
                            > that is trying to discover if you can do a certain thing, it should[/color]
                            presume[color=blue]
                            > the negative in the beginning. Especially for things like "Do I have the
                            > security rights to do <xyz>" it should always be false in the beginning.
                            > That way you ensure that the only time you ever think you have the right[/color]
                            to[color=blue]
                            > do the action is when the code that recogises the affirmative is run.
                            >
                            > I'm not sure if you mean the base Exception type by the "..." in your[/color]
                            code.[color=blue]
                            > I wouldn't write code that catches just a plain exception unless it was
                            > planning to wrap the exception in a more meaningful one and re-throw. If[/color]
                            you[color=blue]
                            > didn't mean to catch the base Exception type there, then the DACL[/color]
                            exception[color=blue]
                            > wouldn't be caught, and the unit test would show an error. This is the way
                            > NUnit behaves - if an assertion fails, you get a test failure, if an
                            > exception escapes from the test, you get a test error.
                            >
                            > Personally, I would have written the function like:
                            >
                            > bool AccessGranted = false;
                            > try
                            > {
                            > new FileStream(@"c: \test.txt",
                            > FileMode.Open,
                            > FileAccess.Read ).Close();
                            > AccessGranted = true;
                            > }
                            > catch (SecurityExcept ion)
                            > {
                            > }
                            >
                            > But, of course, an even more powerful tool than unit testing is[/color]
                            hindsight...[color=blue]
                            >
                            > Niall
                            >
                            >
                            >[/color]


                            Comment

                            • Mark Pearce

                              #44
                              Re: c# 2004 (Whidbey)

                              Hi Alvin,

                              The current "best practice" for exception management is that you shouldn't
                              catch an exception unless you expected that exception, you understand it,
                              and you're going to deal with it. Instead, you should let exceptions that
                              you don't know how to handle bubble upwards to code that does know how to
                              handle that exception, or until the top-level exception handler is reached.

                              Catching System.Exceptio n (without re-throwing it) is bad because it's
                              stating that you know how to handle *every* type of CLS-compliant exception,
                              even unusual exceptions such as ExecutionEngine Exception,
                              OutOfMemoryExce ption and StackOverflowEx ception. In general, your code won't
                              know how to handle *every* type of exception.

                              Of course, sometimes you need to catch every exception, such as when you
                              need to reverse a transaction or you want to create and throw a more
                              meaningful custom exception. But in each one of these cases, you're
                              re-throwing the System.Exceptio n in some form.
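
For example, the transaction case looks something like this sketch
(OrderWriter and SaveOrder are invented names, and the actual commands are
elided):

using System;
using System.Data.SqlClient;

public class OrderWriter
{
    // Sketch: catch everything, but only to reverse the
    // transaction - the exception is always re-thrown.
    public void SaveOrder(SqlConnection connection)
    {
        SqlTransaction transaction = connection.BeginTransaction();
        try
        {
            // ... issue the commands that make up the unit of work ...
            transaction.Commit();
        }
        catch (Exception)
        {
            transaction.Rollback();
            throw;
        }
    }
}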

                              Regards,

                              Mark
                              --
                              Author of "Comprehens ive VB .NET Debugging"



                              "Alvin Bruney" <vapordan_spam_ me_not@hotmail_ no_spamhotmail. com> wrote in
                              message news:eEEiikeaDH A.1740@TK2MSFTN GP10.phx.gbl...[color=blue]
                              > catch during testing, and forget to remove it. And yet another reason is
                              > that developers sometimes don't realise that you shouldn't catch
                              > System.Exceptio n without rethrowing it. Finally, a few of the .NET base[/color]

                              i totally disagree with that statement. i'm here wondering why you made it.
                              would you care to explain? I can partially see your point but that statement
                              is just too general to let slide.

                              "Mark Pearce" <evil@bay.com > wrote in message
                              news:#h2KCFOaDH A.4020@tk2msftn gp13.phx.gbl...[color=blue]
                              > Hi Niall,
                              >[color=green][color=darkred]
                              > >> What I'm wondering is how you can be sure that your code fully meets[/color][/color][/color]
                              the[color=blue]
                              > requirements if you don't test it against the requirements? <<
                              >
                              > Of course I test against requirements, but for me, that's just the first
                              > step. After that, I start testing to find requirements bugs, design bugs,
                              > implementation bugs, etc.
                              >[color=green][color=darkred]
                              > >> I guess the attitude of the unit test is to give you the most important[/color][/color]
                              > thing - a program which does what the customer wants. At the end of the[/color]
                              day,[color=blue]
                              > the customer doesn't care if you have spaghetti code, as long as the[/color]
                              program[color=blue]
                              > works. <<
                              >
                              > If you're doing custom software development, your customer certainly cares
                              > deeply about the internal quality of the code. The customer's staff will
                              > have to debug, maintain and enhance the code for months and years to come.
                              > If you're doing product development, the end-customers may not care about
                              > the internal quality of the code, but the company for which you're
                              > developing the product certainly cares. Once again, the company will have[/color]
                              to[color=blue]
                              > debug, maintain and enhance the product for a long time.
                              >
                              > So somebody, somewhere, *always* cares about the code's internal quality.
                              >[color=green][color=darkred]
                              > >> So the unit test doesn't test design, but it gives you a safeguard to[/color][/color]
                              > facilitate design changes. <<
                              >
                              > Here at least we agree on something!
                              >[color=green][color=darkred]
                              > >> The unit test is supposed to isolate and target specific areas of code.[/color][/color]
                              > <<
                              >
                              > The developer in question thought that he *was* targeting a specific area[/color]
                              of[color=blue]
                              > code. He didn't know about the optimisation, and indeed had no way of
                              > knowing about the optimisation without stepping through the code in a
                              > source-level debugger.
                              >[color=green][color=darkred]
                              > >> I think this could have been coded better. <<[/color][/color]
                              >
                              > Of course in hindsight, it should have been coded better. But you've just
                              > been arguing that the internal code quality doesn't really matter, as long
                              > as the unit tests are satisfied. You can't have it both ways.
                              >[color=green][color=darkred]
                              > >> I wouldn't write code that catches just a plain exception unless it was[/color][/color]
                              > planning to wrap the exception in a more meaningful one and re-throw. <<
                              >
                              > There are various reasons to catch System.Exceptio n, including the one you
                              > mentioned. Another reason is to reverse a transaction after any exception.
                              > Yet another reason is that many developers often put in a System.Exceptio n
                              > catch during testing, and forget to remove it. And yet another reason is
                              > that developers sometimes don't realise that you shouldn't catch
                              > System.Exceptio n without rethrowing it. Finally, a few of the .NET base
                              > class methods suppress all exceptions for "security" reasons, and just[/color]
                              fail[color=blue]
                              > silently.
                              >
                              > Whatever the reason, you won't find this type of requirements/coding bug
                              > unless you step through the code and find that an unexpected exception[/color]
                              type[color=blue]
                              > was being silently thrown and caught.
                              >
                              > Regards,
                              >
                              > Mark
                              > ----
                              > Author of "Comprehens ive VB .NET Debugging"
                              > http://www.apress.com/book/bookDisplay.html?bID=128
                              >[/color]



                              Comment

                              • Niall

                                #45
                                Re: c# 2004 (Whidbey)


                                "Mark Pearce" <evil@bay.com > wrote in message
                                news:%23h2KCFOa DHA.4020@tk2msf tngp13.phx.gbl. ..[color=blue]
                                > Hi Niall,
                                > So somebody, somewhere, *always* cares about the code's internal quality.[/color]

Of course, but my point was that code that has a nice design but doesn't
work is much harder to sell to the customer than code whose design could be
improved but that actually does the job. I'm not saying I have a "just get
it done, we can make it pretty later" attitude, just that it's more
important to make sure the program does what is needed. In my experience,
once you get a decent-sized system, you can be forever "improving" the
design, because the architecture can never really accommodate every case of
usage.

> The developer in question thought that he *was* targeting a specific area of
> code. He didn't know about the optimisation, and indeed had no way of
> knowing about the optimisation without stepping through the code in a
> source-level debugger.

Well, presumably the developer was aware of all the code in the method they
had written? The test should directly cause the methods being tested to be
run. To me, this is what testing in isolation means.

> >> I think this could have been coded better. <<
>
> Of course in hindsight, it should have been coded better. But you've just
> been arguing that the internal code quality doesn't really matter, as long
> as the unit tests are satisfied. You can't have it both ways.

As has been said before, bad practice in coding, unit testing or
step-through debugging can bring either technique down. If the coder had
stepped through the function without causing the other type of exception to
be thrown, then no one would have known about it. All I'm saying is that
better coding of the original method would have allowed the unit test to
pick up the fault. Unit testing doesn't ensure perfect coding practices, and
neither does stepping through with a debugger...

> >> I wouldn't write code that catches just a plain exception unless it was
> planning to wrap the exception in a more meaningful one and re-throw. <<
>
> There are various reasons to catch System.Exception, including the one you
> mentioned. Another reason is to reverse a transaction after any exception.
> Yet another reason is that many developers often put in a System.Exception
> catch during testing, and forget to remove it. And yet another reason is
> that developers sometimes don't realise that you shouldn't catch
> System.Exception without rethrowing it. Finally, a few of the .NET base
> class methods suppress all exceptions for "security" reasons, and just fail
> silently.
>
> Whatever the reason, you won't find this type of requirements/coding bug
> unless you step through the code and find that an unexpected exception type
> was being silently thrown and caught.

Any exception that escapes the function will cause the unit test to fail, so
if the coder was wrapping the exception and rethrowing, the unit test would
have caught it. If it was rolling back the transaction and then rethrowing,
the unit test would have caught that too. The only case that breaks the unit
test is when the exception is swallowed, which is bad practice. You can use
code-smell detection tools to pick out this kind of thing. If you rely only
on stepping through, you rely on the problem raising its head during that
one pass through the code.
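
To illustrate, the only pattern that slips past the test is a silent
swallow, as in this sketch (UpdateRecord is a hypothetical operation
standing in for any real work):

using System;

public class Swallower
{
    // Sketch of the bad pattern: a unit test calling Save() sees
    // normal control flow and passes, even though the work failed.
    public void Save()
    {
        try
        {
            UpdateRecord();
        }
        catch (Exception)
        {
            // Swallowed - neither the test nor the caller ever knows.
        }
    }

    // Hypothetical operation standing in for any real work.
    private void UpdateRecord()
    {
        throw new InvalidOperationException("update failed");
    }
}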

                                Niall


                                Comment
