Posts
Showing posts from 2020
Python Ray library for distributed processing
Finished working through the Jupyter notebook "Scaling Python Processing with Ray" created by Dean Wampler. The notebook can be found on Learning O'Reilly. I like it (both Ray and the learning material). Ray makes it easy to build distributed computations for stateless tasks and stateful computing (actors), and it also has powerful support for reinforcement learning. Basically, you need to know the @ray.remote decorator and then call your object (function or method) through its remote method:

@ray.remote
def do_smt(arg1, arg2):
    # Some operation for distributed computing
    return arg1, arg2

do_smt.remote(1, 2)

The second benefit is how Ray manages chains of tasks. When you call remote, Ray returns something like a Future: an object id. You can pass these objects to the next step and Ray will manage everything. There is also the idea of detached actors: a kind of Ray actor that is not shut down after Ray finishes its computational tasks. You can find it by name and schedule execution on it. ...
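The chaining and detached-actor ideas can be sketched in a few lines. This is a minimal sketch against the public Ray API, not code from the notebook; the task and actor names are made up for illustration.

import ray

ray.init()

@ray.remote
def square(x):
    # Stateless task: calling .remote() returns an object ref (a future) immediately.
    return x * x

@ray.remote
def add(a, b):
    # Object refs can be passed in directly; Ray resolves them to values.
    return a + b

# Chain tasks without waiting for intermediate results.
ref = add.remote(square.remote(2), square.remote(3))
print(ray.get(ref))  # 13

@ray.remote
class Counter:
    # Stateful actor.
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n

# A detached actor outlives the driver process and can later be
# looked up by name with ray.get_actor("counter").
counter = Counter.options(name="counter", lifetime="detached").remote()
print(ray.get(counter.incr.remote()))  # 1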
Interesting pages about the PyPy sandbox
https://stackoverflow.com/questions/25706978/how-to-create-socket-in-pypy-sandbox
https://stackoverflow.com/questions/13769346/interacting-with-a-sandboxed-pypy-interpreter-secondary-communicate-returns
https://github.com/AdityaOli/Building-and-Breaking-a-Python-Sandbox/blob/master/OS%20Level%20Sandboxing%20using%20PyPy's%20Sandbox.md
https://www.pypy.org/download.html
https://www.programiz.com/python-programming/methods/built-in/exec
https://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python
CodeJail
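Some of the links above cover exec-based sandboxing in pure Python. A minimal sketch of that naive approach, and of why it is not a real sandbox (which is the point of the talk below), might look like this; the snippet is illustrative only and not taken from any of the linked pages.

# Naive "sandbox": run untrusted code with a restricted globals dict.
untrusted = "print(sum(range(10)))"
safe_globals = {"__builtins__": {"print": print, "sum": sum, "range": range}}
exec(untrusted, safe_globals)

# This is easy to break: attribute introspection climbs back to arbitrary
# classes, which is why OS-level approaches (PyPy sandbox, CodeJail with
# AppArmor) exist at all.
escape = "[c for c in ().__class__.__base__.__subclasses__()]"
print(len(eval(escape, safe_globals)))  # the restricted code still reaches every loaded class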
Jessica McKellar: Building and breaking a Python sandbox - PyCon 2014
Kafka cluster installation. Only practical experience.
Disclaimer: the story below relates to OpenShift Dedicated v3. I have a project that requires an event sourcing architecture. The project runs in OpenShift. My idea is to try Kafka and RabbitMQ in the broker role. Kafka installation is supported in OpenShift by the Strimzi-based implementation called AMQ Streams. This means that, in the case of OpenShift Dedicated clusters on v3, Red Hat has to enable Streams support for you. After Streams is enabled, you can create a Kafka cluster with a simple command, as described in How to run Kafka on Openshift, the enterprise Kubernetes, with AMQ Streams. Just execute:

$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      external:
        type: route
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF

And you will get your cluster.....
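Once the cluster is up, a quick way to smoke-test it from Python is a small producer against the external route listener. This is a hedged sketch using the kafka-python client, not part of the AMQ Streams docs: the bootstrap host below is hypothetical (take the real one from `oc get routes`), and the route listener requires TLS with the cluster's CA certificate extracted from its secret.

from kafka import KafkaProducer

# External "route" listeners are exposed over TLS on port 443.
producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap-myproject.apps.example.com:443",
    security_protocol="SSL",
    ssl_cafile="ca.crt",  # CA cert exported from the cluster's secret
)
producer.send("my-topic", b"hello from OpenShift")
producer.flush()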
How to execute `kafka-topics.sh` using a broker pod in OpenShift
Red Hat advises using the following command:

oc exec -it <broker_pod_name, e.g. my-cluster-kafka-0> -c kafka -- bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic <topic_name>

Source: the Red Hat portal. Other interesting topics:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/using_amq_streams_on_openshift/uninstalling-str
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/using_amq_streams_on_openshift/assembly-deployment-configuration-str#assembly-storage-deployment-configuration-kafka
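As an alternative sketch (not from the Red Hat docs), topic metadata can also be inspected from outside the broker pod with a plain Kafka client; the bootstrap address below is an assumption, not something taken from the post.

from kafka import KafkaConsumer

# Inspect topics from a client instead of exec-ing into the broker pod.
consumer = KafkaConsumer(bootstrap_servers="my-cluster-kafka-bootstrap:9092")
print(consumer.topics())                          # set of topic names
print(consumer.partitions_for_topic("my-topic"))  # partition ids for one topic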
Building Streaming Microservices with Apache Kafka - Tim Berglund
"Soviet Sausage", or the Lie About Tasty Food in the USSR
Federated Learning: Machine Learning on Decentralized Data (Google I/O'19)
Jenkins Kubernetes plugin interesting feature
The Kubernetes plugin for Jenkins gives us an interesting ability: it can run multiple slaves as pods at the same time and execute commands on each of the running slaves; moreover, it can turn a regular container into a slave. All you need is to define a template like this:

podContainers: [
    containerTemplate(
        name: 'jnlp',
        image: "${dockerRegistry}/cd/jenkins-slave-python",
        workingDir: '/tmp',
        alwaysPullImage: true,
        resourceRequestMemory: '1Gi',
        resourceLimitMemory: '2Gi',
        args: '${computer.jnlpmac} ${computer.name}'
    ),
    containerTemplate(
        name: 'python',
        image: 'python:3.8-slim',
        alwaysPullImage: true,
        ttyEnabled: true,
        resourceRequestMemory: '1Gi',
        resourceLimitMemory: '2Gi',
        command: '',
        workingDir: '/tmp'
    )
],

Please pay attention to the second template definition. It has, from the first look,...
Run Horovod with CPU only
I was interested in running federated model training with Horovod in a Kubernetes cluster. The first question I had was whether it is possible to run Horovod on CPU only, because our containers don't have GPUs. I found out from the Horovod Git repository that this is possible. I just have to install Horovod with the non-GPU version of TensorFlow, and everything should fly. There is also no need to specify GPU settings in the code, like:

config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(hvd.local_rank())

Also, Fardin advises: if you have GPUs on your machine and do not want to use them, set the environment variable CUDA_VISIBLE_DEVICES=-1.
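A minimal CPU-only training sketch with the standard Horovod Keras API, assuming the non-GPU TensorFlow build is installed; the model and data below are made up for illustration.

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide GPUs if any are present

import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per worker; no GPU pinning needed on CPU

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Wrap the optimizer so gradients are averaged across workers.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

x = np.random.rand(256, 8)
y = np.random.rand(256, 1)

# Broadcast initial weights from rank 0 so all workers start identical.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, epochs=1, batch_size=32, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)

Launched with something like: horovodrun -np 2 python train.py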
Categorical Crossentropy vs. Sparse Categorical Crossentropy
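The distinction the title refers to, in short: categorical crossentropy expects one-hot encoded labels, while sparse categorical crossentropy expects integer class indices. A minimal Keras illustration (the tensors are made up):

import tensorflow as tf

y_true_onehot = [[0.0, 1.0, 0.0]]   # one-hot label for class 1
y_true_sparse = [1]                 # the same label as an integer index
y_pred = [[0.1, 0.8, 0.1]]          # predicted class probabilities

cce = tf.keras.losses.CategoricalCrossentropy()
scce = tf.keras.losses.SparseCategoricalCrossentropy()

# Both compute the same loss value; only the label format differs.
print(float(cce(y_true_onehot, y_pred)))
print(float(scce(y_true_sparse, y_pred)))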