// Copyright (c) 2011 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "base/memory/raw_ptr.h"

#include "courgette/adjustment_method.h"

#include <stddef.h>
#include <stdint.h>

#include <algorithm>
#include <limits>
#include <list>
#include <map>
#include <set>
#include <string>
#include <vector>

#include "base/format_macros.h"
#include "base/logging.h"
#include "base/strings/stringprintf.h"
#include "base/time/time.h"
#include "courgette/assembly_program.h"
#include "courgette/courgette.h"
#include "courgette/encoded_program.h"

/*

Shingle weighting matching.

We have a sequence S1 of symbols from alphabet A1={A,B,C,...} called the
'model' and a second sequence S2 of symbols from alphabet A2={U,V,W,...} called
the 'program'.  Each symbol in A1 has a unique numerical name or index.  We can
transcribe the sequence S1 to a sequence T1 of indexes of the symbols.  We wish
to assign indexes to the symbols in A2 so that when we transcribe S2 into T2,
T2 has long subsequences that occur in T1.  This will ensure that the sequence
T1;T2 compresses to be only slightly larger than the compressed T1.

The algorithm for matching members of S2 with members of S1 is eager - it makes
matches without backtracking, until no more matches can be made.  Each variable
(symbol) U,V,... in A2 has a set of candidates from A1, each candidate with a
weight summarizing the evidence for the match.  We keep a VariableQueue of
U,V,... sorted by how much the evidence for the best choice outweighs the
evidence for the second choice, i.e. prioritized by how 'clear cut' the best
assignment is.  We pick the variable with the most clear-cut candidate, make
the assignment, adjust the evidence and repeat.

What has not been described so far is how the evidence is gathered and
maintained.  We are working under the assumption that S1 and S2 are largely
similar.  (A different assumption might be that S1 and S2 are dissimilar except
for many long subsequences.)

A naive algorithm would consider all pairs (A,U) and for each pair assess the
benefit, or score, of the assignment U:=A.  The score might count the number of
occurrences of U in S2 which appear in similar contexts to A in S1.

To distinguish contexts we view S1 and S2 as sequences of overlapping k-length
substrings or 'shingles'.  Two shingles are compatible if the symbols in one
shingle could be matched with the symbols in the other shingle.  For example,
ABC is *not* compatible with UVU because it would require the conflicting
matches A=U and C=U.  ABC is compatible with UVW, UWV, WUV, VUW, etc.  We can't
tell which until we make an assignment - the compatible shingles form an
equivalence class.  After assigning U:=A, only UVW and UWV (equivalently AVW,
AWV) remain compatible.  As we make assignments the number of equivalence
classes of shingles increases and the number of members of each equivalence
class decreases.  The compatibility test becomes more restrictive.

We gather evidence for the potential assignment U:=A by counting how many
shingles containing U are compatible with shingles containing A.  Thus symbols
occurring a large number of times in compatible contexts will be assigned
first.

Finding the 'most clear-cut' assignment by considering all pairs of symbols and
for each pair comparing the contexts of each pair of occurrences of the symbols
is computationally infeasible.  We get the job done in a reasonable time by
approaching it 'backwards' and making incremental changes as we make
assignments.

First the shingles are partitioned according to compatibility.
In S1=ABCDD and S2=UVWXX we have a total of 6 shingles, each occurring once
(ABC:1 BCD:1 CDD:1; UVW:1 VWX:1 WXX:1).  All of them fit either the pattern
<a b c> or the pattern <a b b>.  The first pattern indicates that each position
matches a different symbol, the second pattern indicates that the second symbol
is repeated.

  pattern    S1 members       S2 members
  <a b c>:   {ABC:1, BCD:1}   {UVW:1, VWX:1}
  <a b b>:   {CDD:1}          {WXX:1}

The second pattern appears to have a unique assignment but we don't make the
assignment on such scant evidence.  If S1 and S2 do not match exactly, there
will be numerous spurious low-score matches like this.  Instead we must see
what assignments are indicated by considering all of the evidence.

The first pattern has 2 x 2 = 4 shingle pairs.  For each pair we count the
number of symbol assignments.  For ABC:a * UVW:b we accumulate min(a,b) to each
of {U:=A, V:=B, W:=C}.  After accumulating over all 2 x 2 pairs:
  U: {A:1 B:1}
  V: {A:1 B:2 C:1}
  W: {B:1 C:2 D:1}
  X: {C:1 D:1}
The second pattern contributes:
  W: {C:1}
  X: {D:2}
Sum:
  U: {A:1 B:1}
  V: {A:1 B:2 C:1}
  W: {B:1 C:3 D:1}
  X: {C:1 D:3}

From this we decide to assign X:=D, because this assignment has the largest
difference above the next candidate (X:=C) and is also the largest
proportionately over the sum of the alternatives.

Let's assume D has numerical 'name' 77.  The assignment X:=D sets X to 77 too.
Next we repartition all the shingles containing X or D:

  pattern     S1 members   S2 members
  <a b c>:    {ABC:1}      {UVW:1}
  <a b 77>:   {BCD:1}      {VWX:1}
  <a 77 77>:  {CDD:1}      {WXX:1}

As we repartition, we recalculate the contributions to the scores:
  U: {A:1}
  V: {B:2}
  W: {C:3}
All the remaining assignments are now fixed.

There is one step in the incremental algorithm that is still infeasibly
expensive: the contributions due to the cross product of large equivalence
classes.  We settle for making an approximation by computing the contribution
of the cross product of only the most common shingles.  The hope is that the
noise from the long tail of uncounted shingles is well below the scores being
used to pick assignments.  The second hope is that as assignments are made, the
large equivalence classes will be partitioned into smaller equivalence classes,
reducing the noise over time.

In the code below the shingles are bigger (Shingle::kWidth = 5).  Class
ShinglePattern holds the data for one pattern.

There is an optimization for the case
  <a b b>:    {CDD:1}      {WXX:1}
Above we said that we don't make an assignment on this "scant evidence".  There
is an exception: if there is only one variable unassigned (so the pattern looks
more like <a 77 77>) AND there are no occurrences of C and W other than those
counted in this pattern, then there is no competing evidence and we go ahead
with the assignment immediately.  This produces slightly better results because
these cases tend to be low-scoring and susceptible to small mistakes made in
low-scoring assignments in the approximation for large equivalence classes.

*/
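// The following is a minimal standalone sketch of the evidence-accumulation
// step described in the comment above, using the toy alphabets from the
// example.  The names here (shingle_example, ShingleCounts, Scores,
// AccumulateEvidence) are illustrative assumptions and are not part of
// Courgette's implementation, which works on the LabelInfo / Shingle /
// ShinglePattern objects defined below.
namespace shingle_example {

// A shingle is a short fixed-length window of symbols; the counts record how
// often each shingle occurs on the model (S1) or program (S2) side of one
// compatibility class.
using Shingle = std::string;
using ShingleCounts = std::map<Shingle, int>;

// scores[program_symbol][model_symbol] accumulates the evidence for the
// assignment program_symbol := model_symbol.
using Scores = std::map<char, std::map<char, int>>;

// For every (model shingle, program shingle) pair in one compatibility class,
// credit min(model count, program count) to each positional assignment.
void AccumulateEvidence(const ShingleCounts& model,
                        const ShingleCounts& program,
                        Scores* scores) {
  for (const auto& m : model) {
    for (const auto& p : program) {
      int weight = std::min(m.second, p.second);
      for (size_t i = 0; i < m.first.size() && i < p.first.size(); ++i)
        (*scores)[p.first[i]][m.first[i]] += weight;
    }
  }
}

}  // namespace shingle_example
// Applied to the <a b c> class from the example ({ABC:1, BCD:1} against
// {UVW:1, VWX:1}) this reproduces U:{A:1 B:1}, V:{A:1 B:2 C:1},
// W:{B:1 C:2 D:1}, X:{C:1 D:1}; the <a b b> class ({CDD:1} against {WXX:1})
// adds W:{C:1} and X:{D:2}.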
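// A similarly illustrative sketch of the 'most clear-cut first' selection.  It
// reuses the hypothetical Scores type from the sketch above and picks the
// variable whose best candidate most outweighs its runner-up, breaking ties by
// the proportion of the total evidence, which are the two criteria mentioned
// in the comment.  The real VariableQueue below maintains its ordering
// incrementally rather than rescanning, and its exact ranking may differ; this
// is only an assumption for illustration.
namespace shingle_example {

// Returns {variable, best candidate} for the most clear-cut assignment.
std::pair<char, char> MostClearCutAssignment(const Scores& scores) {
  char best_variable = 0;
  char best_candidate = 0;
  int best_margin = -1;
  double best_ratio = 0.0;
  for (const auto& variable : scores) {
    int top = 0;
    int second = 0;
    int sum = 0;
    char top_symbol = 0;
    for (const auto& candidate : variable.second) {
      sum += candidate.second;
      if (candidate.second > top) {
        second = top;
        top = candidate.second;
        top_symbol = candidate.first;
      } else if (candidate.second > second) {
        second = candidate.second;
      }
    }
    int margin = top - second;
    double ratio = sum > 0 ? static_cast<double>(top) / sum : 0.0;
    if (margin > best_margin ||
        (margin == best_margin && ratio > best_ratio)) {
      best_margin = margin;
      best_ratio = ratio;
      best_variable = variable.first;
      best_candidate = top_symbol;
    }
  }
  return {best_variable, best_candidate};
}

}  // namespace shingle_example
// On the summed scores of the example this returns {X, D}: X and W both have a
// margin of 2 over their runner-up, but X:=D accounts for 3 of 4 points while
// W:=C accounts for only 3 of 5.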
namespace courgette {
namespace adjustment_method_2 {

////////////////////////////////////////////////////////////////////////////////

class AssignmentCandidates;
class LabelInfoMaker;
class Shingle;
class ShinglePattern;

// The purpose of adjustment is to assign indexes to Labels of a program 'p' to
// make the sequence of indexes similar to a 'model' program 'm'.  Labels
// themselves don't have enough information to do this job, so we work with a
// LabelInfo surrogate for each label.
//
class LabelInfo {
 public:
  // Just a no-argument constructor and copy constructor.  Actual LabelInfo
  // objects are allocated in std::pair structs in a std::map.
  LabelInfo()
      : label_(nullptr),
        is_model_(false),
        debug_index_(0),
        refs_(0),
        assignment_(nullptr),
        candidates_(nullptr) {}

  ~LabelInfo();

  AssignmentCandidates* candidates();

  raw_ptr<Label> label_;