Cooperative batching
A problem I often run into in ML (and other domains) is collecting data from worker threads and doing something with it. Sometimes, all you want is one-way communication... but sometimes you want a response. And sometimes (as in ML inference) that data gets massaged and sent somewhere else for processing, and a response comes back at some unspecified time in the future.
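To make the shape of the problem concrete, here is a minimal sketch of that request/response pattern: worker threads submit items to a shared queue and wait on a `Future`, while a single batcher thread drains the queue, processes items as a batch, and resolves each caller's `Future` with its result. The `Batcher` class, its `max_batch` parameter, and the doubling `process_batch` function are all hypothetical, just to illustrate the flow.

```python
import threading
import queue
from concurrent.futures import Future


class Batcher:
    """Hypothetical sketch: collect items from many threads, process in batches."""

    def __init__(self, process_batch, max_batch=8):
        self._queue = queue.Queue()
        self._process_batch = process_batch
        self._max_batch = max_batch
        thread = threading.Thread(target=self._run, daemon=True)
        thread.start()

    def submit(self, item):
        # Each caller gets a Future it can block on (or poll) for its result.
        fut = Future()
        self._queue.put((item, fut))
        return fut

    def _run(self):
        while True:
            # Block for the first item, then greedily drain up to max_batch.
            batch = [self._queue.get()]
            while len(batch) < self._max_batch:
                try:
                    batch.append(self._queue.get_nowait())
                except queue.Empty:
                    break
            results = self._process_batch([item for item, _ in batch])
            # Hand each result back to the thread that submitted the item.
            for (_, fut), result in zip(batch, results):
                fut.set_result(result)


# Example: a "model" that just doubles each input, batched.
batcher = Batcher(lambda xs: [x * 2 for x in xs])
print(batcher.submit(21).result())  # → 42
```

Blocking on `fut.result()` gives you the synchronous request/response case; holding the `Future` and checking it later covers the "comes back at some unspecified time" case.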
python machine-learning