Python Ray library for distributed processing
I finished working through the Jupyter notebook "Scaling Python Processing with Ray" created by Dean Wampler. The notebook can be found on O'Reilly Learning.
I like it (both Ray and the learning material). Ray makes it easy to build distributed computations, both stateless (tasks) and stateful (actors), and it also has powerful support for reinforcement learning.
Basically you need to know the @ray.remote decorator, and then you call your object (function or method) through its .remote() method.
import ray

@ray.remote
def do_smt(arg1, arg2):
    # Some operation for distributed computing
    return arg1, arg2

ref = do_smt.remote(1, 2)  # returns an ObjectRef immediately
print(ray.get(ref))        # blocks and fetches the result: (1, 2)
The second benefit is how Ray manages chains of tasks. When you call .remote(), Ray returns something like a Future: an object ID (ObjectRef). You can pass these objects to the next step, and Ray will manage the dependencies for you.
There is also the idea of detached actors: a kind of Ray actor that won't be shut down after the driver that created it finishes its computational tasks. You can look it up by name and schedule execution on it later.
detached = SomeAnnotatedRayActorClass.options(name="SomeServiceActor", lifetime="detached").remote()
# Some time later, possibly from another driver
detached2 = ray.get_actor("SomeServiceActor")
ray.get(detached2.do_smt.remote(arg1, arg2))