LSEARCH is about optimal time-sharing, given one problem. OOPS is about optimal time-sharing, given an ordered sequence of problems. The basic principles of LSEARCH can be explained in one line: time-share all program tests such that each program gets a constant fraction of the total search time. Those of OOPS require just a few more lines: (1) limit the search to self-delimiting programs forming task-specific prefix codes, and freeze successful programs in non-modifiable memory; (2) given a new task, spend a fixed fraction of the total search time on programs starting with the most recently frozen prefix (test only on the new task, never on previous tasks); (3) spend the rest of the time on fresh programs (when looking for a universal solver, test fresh programs on all previous tasks).
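The LSEARCH time-sharing scheme can be sketched as iterative deepening over binary programs: in each phase, a program of length l receives roughly a 2^-l fraction of the phase budget, so each program's share of the total time is a constant fraction depending only on its length. The sketch below is illustrative, not the paper's machinery; `run(program, steps)` is an assumed interpreter interface that executes a bitstring for at most `steps` steps and returns its output, or None on timeout.

```python
def lsearch(run, is_solution, max_phase=25):
    """Levin-style universal search (illustrative sketch).

    `run(program, steps)` is an assumed interpreter, and `is_solution`
    an assumed test of its output; neither name is from the paper.
    """
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            # A program of length l gets ~2^-l of this phase's budget.
            steps = 2 ** (phase - length)
            for bits in range(2 ** length):
                program = format(bits, f"0{length}b")
                out = run(program, steps)
                if out is not None and is_solution(out):
                    return program
    return None
```

Re-running short programs in every phase only doubles the total work per phase, which is the price iterative deepening pays for not knowing the right budget in advance.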
OOPS allocates part of the total search time for a new problem to programs that exploit previous solutions in computable ways. If the new problem can be solved faster by copy-editing or invoking previous code than by solving it from scratch, then OOPS will discover this and profit thereby. If not, then at least the previous solutions will not significantly slow it down.
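The split between reuse and fresh search can be sketched as two interleaved enumerations, each receiving a constant fraction (here one half) of the effort. Everything below is an assumption for illustration: `solves(program, task)` stands in for the interpreter, programs are plain bitstrings, and the equal round-robin interleave stands in for the paper's probability-weighted time allocation.

```python
def extensions(prefix, max_len):
    # All binary programs extending `prefix`, shortest first (sketch).
    for length in range(max_len + 1):
        for bits in range(2 ** length):
            yield prefix + format(bits, f"0{length}b")

def oops_search(solves, frozen_prefix, new_task, all_tasks, max_len=8):
    """One OOPS-style search round (illustrative sketch)."""
    old = extensions(frozen_prefix, max_len)   # reuse frozen code
    fresh = extensions("", max_len)            # start from scratch
    while True:
        # Branch 1: programs starting with the most recently frozen
        # prefix, tested on the new task only.
        p = next(old, None)
        if p is not None and solves(p, new_task):
            return p
        # Branch 2: fresh programs, tested on all tasks seen so far
        # (looking for a universal solver).
        q = next(fresh, None)
        if q is not None and all(solves(q, t) for t in all_tasks):
            return q
        if p is None and q is None:
            return None
```

The round-robin only captures the constant-fraction split; it omits backtracking, incremental program execution, and the freezing of newly successful code that the full method performs.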
Since OOPS is conceptually simple in principle, why does the paper not end here, instead of continuing for 38 more pages? The answer is: to describe the additional efforts required to make OOPS work on realistic, limited computers, as opposed to universal machines.