Yesterday, I added support for Python’s multiprocessing module to pyHarmonySearch. I use
multiprocessing to run multiple harmony searches simultaneously. Since harmony search is stochastic, different results may be returned on each run. Some runs will be luckier than others, so I figured it makes sense to take advantage of multiple cores by running multiple search iterations. The resulting solution is the best one found across all iterations.
I don’t have a rigorous proof of this, but I’ve seen better results on the test objective functions I’ve included on GitHub. On machines with many cores, the improved results come at minimal extra wall-time cost. Furthermore, instead of running a single HS instance with a very large number of improvisations, it may be beneficial to simultaneously run multiple HS instances, each with a smaller number of improvisations. This is all conjecture, though, and it likely depends on the objective function being studied.
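The best-of-N idea above can be sketched with a plain `multiprocessing.Pool`. This is not the pyHarmonySearch API; `random_search` is a hypothetical stand-in for one stochastic search instance, seeded differently per run so each worker explores independently:

```python
import random
from multiprocessing import Pool

def random_search(seed, num_improvisations=1000):
    """One independent stochastic run; a toy stand-in for a single
    harmony search instance (not the actual pyHarmonySearch code)."""
    rng = random.Random(seed)
    best_x, best_value = None, float("-inf")
    for _ in range(num_improvisations):
        x = rng.uniform(-10, 10)
        value = -(x - 3) ** 2  # toy objective to maximize; optimum at x = 3
        if value > best_value:
            best_x, best_value = x, value
    return best_x, best_value

if __name__ == "__main__":
    with Pool() as pool:
        # Run several independent searches in parallel, one per seed.
        results = pool.map(random_search, range(8))
    # The final answer is the best solution found across all runs.
    best_x, best_value = max(results, key=lambda r: r[1])
    print(best_x, best_value)
```

Because each run is seeded independently, the expected quality of the best-of-8 result is at least as good as any single run, which is the intuition behind running many smaller searches instead of one large one.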