grep text file line by line and grep a directory for a match of each line

  • kronus
    New Member
    • May 2008
    • 16

    grep text file line by line and grep a directory for a match of each line

    Hi everyone,

    This is my first time posting to the UNIX forum, and it might be a strange request, but I believe I have most of the pieces.

    Here's the overall goal -- I am trying to find the links in a large web site that are linked to files over 2000k.

    I used the command line to find all files on the server that are larger than 2000k by using the following:

    Code:
    find . -type f -size +2000k > files_over_2000_bytes.txt
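For what it's worth, `find`'s `-size` suffixes are easy to mix up: `c` means bytes while `k` means 1 KiB blocks, so `+2000c` matches anything over 2000 bytes, not 2000k. A quick sketch (the scratch files are made up for illustration):

```shell
# Build a scratch directory with one ~2.5 KB file and one ~3 MB file
tmp=$(mktemp -d)
head -c 2500    /dev/zero > "$tmp/small.bin"
head -c 3000000 /dev/zero > "$tmp/big.bin"

# +2000c = over 2000 bytes: matches both files
find "$tmp" -type f -size +2000c

# +2000k = over 2000 KiB: matches only big.bin
find "$tmp" -type f -size +2000k
```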
    And I know how to grep all the files in the web site to find a certain word by using the following:

    Code:
    grep -nr Something * > Something.txt
    Since I haven't used awk except for way back in '98, I am at a loss for how I could place these two commands together.

    In any type of language that I am familiar with, I would make a function out of the first line.

    Then I would create another function to read each line within the text file and place it into a variable.

    Then I would grep the web site using the variable.

    So, I have the logic, but I do not know the syntax.

    Can anyone help me?

    Thanks in advance
  • gpraghuram
    Recognized Expert Top Contributor
    • Mar 2007
    • 1275

    #2
    Do you want to embed the whole logic in an AWK program? Or what exactly is your requirement?

    raghu

    Comment

    • kronus
      New Member
      • May 2008
      • 16

      #3
      This is what I came up with:

      Code:
      #!/bin/bash
      
      # Quote "$LINE" so paths with spaces stay intact,
      # and append (>>) so each iteration keeps earlier matches
      while read -r LINE
      do
          grep -nr "$LINE" * >> matched_line.txt
      done < files_over_2000_bytes.txt

      Comment

      • ghostdog74
        Recognized Expert Contributor
        • Apr 2006
        • 511

        #4
        use the -f option of grep, eg
        Code:
        grep -f file1 file2
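In other words, `grep -f` takes every line of `file1` as a pattern and searches `file2` for all of them at once. A small sketch with made-up file contents (`-F` treats each line as a fixed string, which matters here because `.` is a regex metacharacter):

```shell
# One search string per line
printf '1.jpg\n2.jpg\n' > /tmp/patterns.txt
# A page that links one of them
printf '<a href="images/1.jpg">one</a>\n<a href="images/9.jpg">nine</a>\n' > /tmp/page.html

# -n adds line numbers; only the 1.jpg line matches
grep -nF -f /tmp/patterns.txt /tmp/page.html
```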

        Comment


          • kronus
            New Member
            • May 2008
            • 16

            #6
            I don't know if I understand your reply, because I am trying to read each individual line of file1 to grep the entire web site.

            So are you saying that I should write:
            Code:
            #!/bin/bash
            
             grep -f files_over_2000_bytes.txt * > matched_line.txt
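Roughly, yes: `-f` makes grep do the whole while-loop in a single pass, treating each line of the file as a pattern. One caveat, though: that file holds absolute paths, while pages usually link by relative path, so it may help to strip each line down to its bare filename first. A sketch, assuming the filenames from earlier in the thread:

```shell
# Strip everything up to the last slash, e.g.
#   /var/www/vhosts/something.com/images/1.jpg  ->  1.jpg
sed 's|.*/||' files_over_2000_bytes.txt > patterns.txt

# One recursive, fixed-string pass over the site with line numbers
grep -rnF -f patterns.txt * > matched_line.txt
```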

            Comment

            • kronus
              New Member
              • May 2008
              • 16

              #7
              Looking back at my own post, I realize it may not have made much sense to anyone but me.

              Say I have a file with the following:
              Code:
              /var/www/vhosts/something.com/images/1.jpg
              /var/www/vhosts/something.com/images/2.jpg
              /var/www/vhosts/something.com/images/3.jpg
              /var/www/vhosts/something.com/images/4.jpg
              /var/www/vhosts/something.com/images/4.jpg
              Let's call this file "files_over_2000k.txt"

              What I would like to do is grep the entire web site using each line from "files_over_2000k.txt" to find any links within any page to "1.jpg," "2.jpg," etc., and place the results of where they are located into another text file called "matched_line.txt"
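Putting the whole thread together, the job can be done without a loop at all. A minimal sketch, assuming the web root and file names from the posts above (adjust the paths to your server):

```shell
#!/bin/bash
webroot=/var/www/vhosts/something.com   # assumption: the site lives here

# 1. List every file over 2000 KiB under the web root
find "$webroot" -type f -size +2000k > files_over_2000k.txt

# 2. Keep only the bare filenames, since pages link by relative path
sed 's|.*/||' files_over_2000k.txt > patterns.txt

# 3. Search every page once for any of those names
grep -rnF -f patterns.txt "$webroot" > matched_line.txt
```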

              Comment
