perl hash doubt

  • lilly07
    New Member
    • Jul 2008
    • 89

    perl hash doubt

    Hi,
    I have two text files, as below:

    file1.txt (4 columns)
    Code:
    test1 1000 2000 +
    test2 1000 2000 -
    test1 1000 2000 +
    test3 1000 2000 +
    test1 1000 2000 -
    test2 2000 3000 +
    test1 1000 2000 +
    test1 1000 3000 -
    file2.txt also contains very similar data.

    The first step in the processing is to collect all the data. I want to use column 1 as the key and columns 2, 3 and 4 as the values.

    For all test1 records, the other three columns should be added after removing duplicates. If the two lines below appear, only one value should be added:
    Code:
    test1 1000 2000 -
    test1 1000 2000 -
    key = {test1} and value = {1000 2000 -}
    Since I have always used a Perl hash with two columns (the first column as the key and the second as the value, which also got rid of duplicates of the second column as I added them), I want to know how that can be applied to the above dataset: column 1 as the key and columns 2, 3 and 4 as the values, basically to remove duplicates. Can I concatenate them into one string with some pattern such as "1000:2000:-"? Please let me know.

    My second query is how to compare hash1 (the data from file1) and hash2 (the data from file2).

    For example, for test1 (and every other key), I have to compare the values between the two hashes.

    Please let me know, as I am not familiar with a Perl hash of hashes. Do I have to use that?

    My basic motivation is to remove duplicates from the two files and then compare the two hashes to find how many of the column2:column3:column4 combinations are present in both files, as well as the ones that are unique to each data set. Or is there any other way to handle the data?
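
    For the comparison, I imagine something along these lines (untested; %hash1 and %hash2 are assumed to be built from file1 and file2 as above):
    Code:
    my ( $common, $only1, $only2 ) = ( 0, 0, 0 );
    for my $key ( keys %hash1 ) {
        for my $value ( keys %{ $hash1{$key} } ) {
            if ( exists $hash2{$key} && exists $hash2{$key}{$value} ) {
                $common++;    # pair present in both files
            }
            else {
                $only1++;     # unique to file1
            }
        }
    }
    for my $key ( keys %hash2 ) {
        for my $value ( keys %{ $hash2{$key} } ) {
            $only2++ unless exists $hash1{$key} && exists $hash1{$key}{$value};
        }
    }
    print "common: $common, only file1: $only1, only file2: $only2\n";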

    It is all quite confusing to me. A small example would make it easier for me to proceed.

    Thanks in advance.
  • toolic
    Recognized Expert New Member
    • Sep 2009
    • 70

    #2
    I do not understand what you are trying to accomplish.

    If you also post a small example of your 2nd file, along with the exact output you are looking for, I might be able to create some example code for you.

    A hash-of-hashes data structure is very useful and may be appropriate for this task.
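
    Just to illustrate the shape of such a structure (a sketch only, using made-up entries based on your sample data):
    Code:
    use Data::Dumper;

    # outer key: column 1; inner key: the remaining columns joined with ':'
    my %hoh = (
        test1 => { '1000:2000:+' => 1, '1000:3000:-' => 1 },
        test2 => { '1000:2000:-' => 1, '2000:3000:+' => 1 },
    );
    print Dumper( \%hoh );    # duplicates collapse because inner keys are unique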


    • lilly07
      New Member
      • Jul 2008
      • 89

      #3
      Hi toolic, thanks for your response. My file2.txt also contains data in the same format as file1.txt:

      Code:
      test1 1000 4000 + 
      test3 1000 2000 - 
      test1 1000 2000 + 
      test5 5000 7000 + 
      test1 1000 2000 - 
      test2 2000 4000 + 
      test3 1000 6000 + 
      test1 1000 3000 -
      As I mentioned earlier, I need to collect them with test1 (column 1) as the key, and columns 2, 3 and 4 have to be collected as values after removing the duplicates. For example,
      if there are two records as below:
      Code:
      test1 1000 2000 - 
      test1 1000 2000 -
      Then only one pair should be added into the hash, with test1 as the {key} and "1000 2000 -" as the {value}. This removes the duplicates of columns 2, 3 and 4.

      And the next step would be comparing two files.

      As the file1 and file2 data are collected in two different hashes, I want to check, for every key in the file2 hash (i.e. test1), whether each of its values exists in the file1 hash under the same key. That is, matching keys between the two hashes are selected and the values in both hashes are compared. Is this feasible?
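
      In other words, for each key and value of the file2 hash, I want a check roughly like this (untested):
      Code:
      for my $key ( keys %hash2 ) {
          for my $value ( keys %{ $hash2{$key} } ) {
              if ( exists $hash1{$key} and exists $hash1{$key}{$value} ) {
                  print "present in both files: $key $value\n";
              }
              else {
                  print "unique to file2: $key $value\n";
              }
          }
      }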
      I know this is highly confusing. Sorry and thanks again.

      Regards


      • nithinpes
        Recognized Expert Contributor
        • Dec 2007
        • 410

        #4
        The approach would be to remove duplicate lines first.
        Then split each unique line on spaces into an array and create a hash of arrays: the first column in the file becomes the hash key, and the remaining values are pushed into an array, which is the value for that key. Whenever you come across a key that already exists, append the rest of the elements to the existing value for that key.
        Since you haven't posted the code that you tried, I am giving only the approach and a bare skeleton of it below. If you face any issues, post your code so that we can correct or modify it.
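
        A skeleton of that approach might look like this (untested; the file name is passed as a command-line argument):
        Code:
        use strict;
        use warnings;

        my %hoa;     # hash of arrays: column 1 => [ col2, col3, col4, ... ]
        my %seen;    # tracks duplicate lines
        while ( my $line = <> ) {
            chomp $line;
            next if $seen{$line}++;                 # remove duplicate lines first
            my ( $key, @rest ) = split ' ', $line;
            push @{ $hoa{$key} }, @rest;            # append remaining columns
        }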


        • lilly07
          New Member
          • Jul 2008
          • 89

          #5
          Yes. Thank you so much for your response.

          I managed to remove duplicates as below, in a very simple way.

          Code:
          use strict;
          use warnings;

          my %seen;
          while (<>) {
              next if $seen{$_}++;    # skip lines we have already seen
              print;                  # print first occurrences only
          }
          As I don't know how to compare two hashes, I couldn't proceed further; my rough guess is below. I will definitely follow your suggestion of pushing the data into a hash of arrays. Thanks.
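
          My guess for the comparison, using one %seen-style hash per file, is something like this (untested, and I am not sure it is right):
          Code:
          use strict;
          use warnings;

          # read one file into a hash of its unique lines
          sub read_unique {
              my ($file) = @_;
              my %seen;
              open my $fh, '<', $file or die "Cannot open $file: $!";
              $seen{$_} = 1 while <$fh>;
              close $fh;
              return %seen;
          }

          my %seen1 = read_unique('file1.txt');
          my %seen2 = read_unique('file2.txt');

          for my $line ( keys %seen1 ) {
              if ( exists $seen2{$line} ) { print "in both files: $line"; }
              else                        { print "only in file1: $line"; }
          }
          for my $line ( keys %seen2 ) {
              print "only in file2: $line" unless exists $seen1{$line};
          }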

