- Distributed Constraint Optimization Problems (DCOPs) are an elegant model for representing and solving many realistic combinatorial problems that are distributed by nature. DCOPs are NP-hard, and therefore many recent studies consider incomplete algorithms for solving them. Distributed local search algorithms, in which agents in the system hold value assignments to their variables and iteratively decide whether to replace them, can be used for solving DCOPs. However, because the global evaluation of a system's state differs from the agents' private evaluations of states, agents are unaware of the globally best state explored by the algorithm. Previous attempts to use local search algorithms for solving DCOPs reported the state held by the system when the algorithm terminated, which was not necessarily the (globally) best state explored. We propose a general framework that enhances distributed local search algorithms for DCOPs with the anytime property. The framework uses a BFS-tree to accumulate the costs of the system's states during the algorithm's iterative execution and to propagate the detection of a new best state when one is found. It imposes no additional network load, and agents need only a small (linear) amount of additional space beyond the requirements of the algorithm in use. We further propose a set of increased-exploration heuristics that exploit the proposed anytime framework, each implementing a different approach to exploration. Our empirical study considers various scenarios, including random, realistic, and structured problems, and reveals the advantage of using the proposed heuristics within the anytime framework over state-of-the-art local search algorithms.
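
The anytime idea described above can be illustrated with a minimal sketch: a synchronous, centralized simulation of DSA-style local search on a toy ring of binary constraints, which tracks the best state explored rather than only the final one. This is an assumption-laden illustration, not the paper's actual framework: all names (`anytime_dsa`, `global_cost`, the ring topology, the change probability `P`) are invented for this sketch, and the global-cost bookkeeping that the paper performs distributively over a BFS-tree is computed directly here for clarity.

```python
import random

# Hedged sketch: synchronous DSA-style local search with anytime tracking.
# All identifiers and parameters below are illustrative assumptions.
random.seed(0)

N, DOMAIN, P = 6, range(3), 0.7          # agents, value domain, DSA change probability
# a ring of binary constraints with random symmetric cost tables
edges = [(i, (i + 1) % N) for i in range(N)]
cost = {e: {(u, v): random.randint(0, 9) for u in DOMAIN for v in DOMAIN}
        for e in edges}

def global_cost(assign):
    """Sum of constraint costs over the whole system (the global evaluation)."""
    return sum(cost[(i, j)][(assign[i], assign[j])] for (i, j) in edges)

def anytime_dsa(iterations=50):
    assign = [random.choice(list(DOMAIN)) for _ in range(N)]
    best, best_cost = list(assign), global_cost(assign)
    for _ in range(iterations):
        new = list(assign)
        for i in range(N):
            # each agent privately evaluates alternative values against its
            # neighbors' current assignments (its local view only)
            def local(v):
                a = list(assign)
                a[i] = v
                return sum(cost[e][(a[e[0]], a[e[1]])] for e in edges if i in e)
            v_best = min(DOMAIN, key=local)
            if local(v_best) < local(assign[i]) and random.random() < P:
                new[i] = v_best
        assign = new
        # anytime step: in the distributed framework this total would be
        # accumulated up a BFS-tree and a new best propagated from the root;
        # here we compute it directly
        c = global_cost(assign)
        if c < best_cost:
            best, best_cost = list(assign), c
    return best, best_cost, global_cost(assign)

best, best_cost, final_cost = anytime_dsa()
# the anytime result is never worse than the state held at termination
assert best_cost <= final_cost
```

The key point the sketch demonstrates is the gap the abstract identifies: the state held at termination (`final_cost`) can be worse than the best state explored during the run (`best_cost`); the anytime mechanism guarantees the latter is reported.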