Perl 5 version 12.2 documentation

study

  • study SCALAR

  • study

    Takes extra time to study SCALAR ($_ if unspecified) in anticipation of doing many pattern matches on the string before it is next modified. This may or may not save time, depending on the nature and number of patterns you are searching on, and on the distribution of character frequencies in the string to be searched; you probably want to compare run times with and without it to see which runs faster. Those loops that scan for many short constant strings (including the constant parts of more complex patterns) will benefit most. You may have only one study active at a time: if you study a different scalar the first is "unstudied". (The way study works is this: a linked list of every character in the string to be searched is made, so we know, for example, where all the 'k' characters are. From each search string, the rarest character is selected, based on some static frequency tables constructed from some C programs and English text. Only those places that contain this "rarest" character are examined.)

    For example, here is a loop that inserts index-producing entries before any line containing a certain pattern:

    while (<>) {
        study;
        print ".IX foo\n" if /\bfoo\b/;
        print ".IX bar\n" if /\bbar\b/;
        print ".IX blurfl\n" if /\bblurfl\b/;
        # ...
        print;
    }
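    The `study SCALAR` form works the same way on a named variable. A minimal sketch (the string and word list here are hypothetical, chosen only to illustrate studying a scalar before several word-boundary matches):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Study a named scalar in anticipation of many pattern matches on it.
    # Remember: only one scalar may be studied at a time; studying another
    # "unstudies" this one.
    my $text = "the quick brown fox jumps over the lazy dog";
    study $text;

    my @found;
    for my $w (qw(fox dog cat)) {
        push @found, $w if $text =~ /\b$w\b/;
    }
    print "@found\n";    # prints "fox dog"
    ```

    As the description above notes, whether this actually saves time depends on the patterns and the string; timing with and without the `study` call is the only reliable check.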

    In searching for /\bfoo\b/, only locations in $_ that contain f will be looked at, because f is rarer than o. In general, this is a big win except in pathological cases. The only question is whether it saves you more time than it took to build the linked list in the first place.

    Note that if you have to look for strings that you don't know till runtime, you can build an entire loop as a string and eval that to avoid recompiling all your patterns all the time. Together with undefining $/ to input entire files as one record, this can be quite fast, often faster than specialized programs like fgrep(1). The following scans a list of files (@files) for a list of words (@words), and prints out the names of those files that contain a match:

    $search = 'while (<>) { study;';
    foreach $word (@words) {
        $search .= "++\$seen{\$ARGV} if /\\b$word\\b/;\n";
    }
    $search .= "}";
    @ARGV = @files;
    undef $/;
    eval $search;    # this screams
    $/ = "\n";       # put back to normal input delimiter
    foreach $file (sort keys(%seen)) {
        print $file, "\n";
    }
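    To see what eval actually compiles, it can help to print the generated string instead of running it. A small sketch, assuming @words contains just 'foo' and 'bar' (hypothetical data; the string-building logic is taken directly from the example above):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    my @words  = ('foo', 'bar');
    my $search = 'while (<>) { study;';
    foreach my $word (@words) {
        # Each word becomes one match-and-count statement in the loop body.
        $search .= "++\$seen{\$ARGV} if /\\b$word\\b/;\n";
    }
    $search .= "}";
    print $search, "\n";
    ```

    The printed string is a complete while loop: one `study` call plus one `++$seen{$ARGV} if /\bword\b/;` statement per word, so all patterns are compiled exactly once when the string is eval'd.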