Some problems are relatively easy to solve, in the sense that the complexity grows at a reasonable (i.e., polynomial) rate even as the problem instances get larger. For example, arithmetic remains manageable even with large numbers: the complexity of long multiplication is roughly quadratic in the length of the numbers. Broadly speaking, these easy problems are known as “P”.
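To make the quadratic claim concrete, here is a minimal sketch (function name and digit representation are my own) of schoolbook long multiplication that counts the single-digit multiplications it performs; the count is exactly the product of the two lengths, hence quadratic when the numbers have similar length:

```python
def schoolbook_multiply(a_digits, b_digits):
    """Multiply two numbers given as lists of base-10 digits (least
    significant digit first), counting single-digit multiplications.
    The count is len(a_digits) * len(b_digits) -- quadratic growth."""
    result = [0] * (len(a_digits) + len(b_digits))
    ops = 0
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            result[i + j] += a * b
            ops += 1
    # Propagate carries so each entry is a single digit.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return result, ops
```

Doubling the length of both inputs quadruples the operation count, which is what “roughly quadratic” means in practice.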
For some problems, it is relatively easy to verify that a proposed solution is correct, but there might not be any easy way to solve them. For example, if I ask you to find a set of items on an arbitrary restaurant menu that adds up exactly to a certain total, it might be tricky to find a solution (it seems to essentially require exhaustive search, which grows exponentially with the size of the menu), but it is relatively simple to verify that a solution is correct (by checking the addition). Broadly speaking, these types of problems are known as “NP”.
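The menu example is the classic subset-sum problem. A minimal sketch (the menu, prices, and function names are hypothetical) showing the asymmetry: finding a solution tries exponentially many subsets, while verifying one is a single addition:

```python
from itertools import combinations

def find_exact_order(menu, total_cents):
    """Brute-force search: try every subset of menu items (there are
    2**len(menu) of them) until one sums exactly to the target."""
    items = list(menu.items())
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            if sum(price for _, price in combo) == total_cents:
                return [name for name, _ in combo]
    return None

def verify_order(menu, names, total_cents):
    """Verification is easy: just re-add the prices."""
    return sum(menu[name] for name in names) == total_cents

# Hypothetical menu; prices in cents to avoid floating-point issues.
menu = {"soup": 450, "salad": 625, "burger": 1050, "fries": 375, "pie": 550}
```

With five items the search is instant, but each added menu item doubles the number of subsets to consider, while verification stays a simple sum regardless of menu size.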
The set of problems “P” is contained within “NP”, since if a problem is easy to solve, it is also easy to verify that a solution is correct. However, whether “P = NP” (all easy-to-verify problems are also easy to solve) or “P != NP” (some easy-to-verify problems are hard to solve) is the biggest open problem in computer science, and an important unsolved question for mathematics in general. Many people believe, and operate under the assumption, that “P != NP”, but a proof has remained elusive.
The “NP-complete” problems are essentially the hardest subset of the NP problems: it has been demonstrated that an efficient algorithm for any one of them could be converted into an efficient algorithm for every other NP problem. “NP-hard” refers to a superset of the NP-complete problems, consisting of problems at least as hard as the NP-complete ones but possibly even harder (i.e., problems where even verifying the correctness of a solution cannot be done efficiently).
In practice, you might only have small connected chains to deal with in most cases, so optimization by exhaustive search is not prohibitive. However, in principle, one could have contrived positions with large numbers of awkwardly clumped stones.
Exhaustive search would find an exact solution, but it quickly blows up with exponential complexity as the number of stones grows. My previous post was essentially wondering whether there might be some other clever way to do this with much less complexity. This looks like a tiling optimization problem, a class that has been widely studied and in which various related problems have been shown to be NP-complete or NP-hard. Maybe this point compression problem could be cast directly as a previously studied tiling optimization problem. I guess it is a rectangular tiling optimization problem, where each 1x1 rectangle costs 4 characters (e.g., “[aa]”), while every larger rectangle costs 7 (e.g., “[aa:bb]”).
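To make the cost model concrete, here is a minimal sketch (my own naming and structure, assuming rectangles may only contain points from the set and may overlap) of the exhaustive search described above: enumerate every rectangle lying entirely inside the point set, then branch on which rectangle covers some uncovered point, keeping the cheapest total under the 4-vs-7 cost rule:

```python
def min_cover_cost(points):
    """Exhaustive search for the cheapest cover of `points` (a set of
    (x, y) pairs) by axis-aligned rectangles that contain only points
    from the set.  A 1x1 rectangle costs 4 characters ("[aa]"); any
    larger rectangle costs 7 ("[aa:bb]").  Exponential in the worst case."""
    pts = frozenset(points)
    xs, ys = sorted({x for x, _ in pts}), sorted({y for _, y in pts})
    # Candidate rectangles: all whose cells lie entirely in the point set.
    rects = []
    for x1 in xs:
        for x2 in (x for x in xs if x >= x1):
            for y1 in ys:
                for y2 in (y for y in ys if y >= y1):
                    cells = frozenset((x, y) for x in range(x1, x2 + 1)
                                             for y in range(y1, y2 + 1))
                    if cells <= pts:
                        rects.append((4 if len(cells) == 1 else 7, cells))
    best = [None]

    def search(uncovered, cost):
        if best[0] is not None and cost >= best[0]:
            return  # prune: already worse than the best cover found
        if not uncovered:
            best[0] = cost
            return
        p = next(iter(uncovered))  # some rectangle must cover this point
        for rcost, cells in rects:
            if p in cells:
                search(uncovered - cells, cost + rcost)

    search(pts, 0)
    return best[0]
```

For example, a solid 2x3 block is covered by one rectangle for 7, while an L-shaped triple needs a 1x2 rectangle plus a single point (7 + 4 = 11), beating three singles (12). The branching factor is the number of candidate rectangles through each point, which is what blows up on large clumped positions.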