Posts

Showing posts from 2020

Breakout Bot Game Diaries - Season Four - Episode 5: Regex


Communism victims

 If you have relatives who disappeared in the millstones of communist repression and you are looking for information about them, you can visit bessmertnybarak. Information about thousands of victims is gathered there.

Robert Plutchik's Wheel of Emotions

You can download the image in high resolution from Google Drive. A wonderful site with an interactive wheel of emotions.

Flask-SQLAlchemy multiple models in separate files

A really good comment I found on StackOverflow

Python Ray library for distributed processing

Finished working through a Jupyter notebook, "Scaling Python Processing with Ray", created by Dean Wampler. The notebook can be found on O'Reilly Learning. I like it (both Ray and the learning material). Ray makes it easy to build distributed computations, both stateless (tasks) and stateful (actors), and it also has powerful support for reinforcement learning. Basically, you need to know the @ray.remote decorator and then call your object (function or method) through its remote method:

    @ray.remote
    def do_smt(arg1, arg2):
        # Some operation for distributed computing
        return arg1, arg2

    do_smt.remote(1, 2)

The second benefit is how Ray manages chains of tasks. When you call remote, Ray returns something like a Future: an object ID. You can pass these objects to the next step and Ray will manage everything. There is also the idea of detached actors: a kind of Ray actor that won't be shut down after Ray finishes its computational tasks. You can find it by name and schedule execution. ...

Python data class to JSON

A super cool module that converts Python data classes to JSON or Python dicts: dataclasses-json
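For the serialization direction alone, the standard library already gets you there via dataclasses.asdict; what dataclasses-json adds on top is the reverse trip (from_json / from_dict) and nested handling. A minimal stdlib-only sketch (the User class and values are illustrative, not from the post):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class User:
    name: str
    age: int

# asdict() recursively turns the data class into a plain dict,
# which json.dumps can then serialize.
user = User(name="Alice", age=30)
payload = json.dumps(asdict(user))
print(payload)  # {"name": "Alice", "age": 30}
```

With dataclasses-json you would instead decorate the class and get to_json()/from_json() methods for free.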

Spleeter from Deezer

A super cool project for audio processing:  https://github.com/deezer/spleeter

Helpful tools to develop a site

Materialize, PythonAnywhere

Interesting pages about Pypy Sandbox

https://stackoverflow.com/questions/25706978/how-to-create-socket-in-pypy-sandbox
https://stackoverflow.com/questions/13769346/interacting-with-a-sandboxed-pypy-interpreter-secondary-communicate-returns
https://github.com/AdityaOli/Building-and-Breaking-a-Python-Sandbox/blob/master/OS%20Level%20Sandboxing%20using%20PyPy's%20Sandbox.md
https://www.pypy.org/download.html
https://www.programiz.com/python-programming/methods/built-in/exec
https://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python
CodeJail

Jessica McKellar: Building and breaking a Python sandbox - PyCon 2014

Really cool Federated Learning tutorial.

Please check this link:  Introduction to Federated Learning

Installing a Kafka cluster in OpenShift, part 2


Installing a Kafka cluster in OpenShift, part 1


Kafka cluster installation. Only practical experience.

Disclaimer: the story below relates to OpenShift Dedicated v3. I have a project that requires an event-sourcing architecture. The project runs in OpenShift. My idea is to try both Kafka and RabbitMQ in the broker role. Kafka installation is supported in OpenShift by the Strimzi implementation called AMQ Streams. This means that, in the case of OpenShift Dedicated clusters on v3, Red Hat has to enable Streams support for you. After Streams is enabled, you can create a Kafka cluster with a simple command, as described in "How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams". Just execute:

    $ cat << EOF | oc create -f -
    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          external:
            type: route
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
    EOF

And you will get your cluster. ...
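Since the cluster spec above enables the topicOperator, topics can then be managed declaratively too. A hedged sketch of a KafkaTopic resource (the topic name and sizing here are illustrative, not from the original post):

```shell
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    # The topic operator uses this label to find the cluster it belongs to.
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
EOF
```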

How to execute `kafka-topics.sh` using a broker pod in OpenShift

Red Hat advises using the following command:

    oc exec -it <broker_pod_name, e.g. my-cluster-kafka-0> -c kafka -- bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic <topic_name>

Red Hat portal. Other interesting topics:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/using_amq_streams_on_openshift/uninstalling-str
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/using_amq_streams_on_openshift/assembly-deployment-configuration-str#assembly-storage-deployment-configuration-kafka

Deploy RabbitMQ cluster in OpenShift

https://github.com/jharting/openshift-rabbitmq-cluster

Deploy Kafka in OpenShift

I guess the most valid links are:  Finite Loop Git Repo  and Kunal Repo , and the former builds on the latter. But the best way is Strimzi.

Building Streaming Microservices with Apache Kafka - Tim Berglund


"Soviet Sausage", or the Lie About Tasty Food in the USSR

"Soviet Sausage", or the Lie About Tasty Food in the USSR : Source

Federated Learning: Machine Learning on Decentralized Data (Google I/O'19)


Jenkins Kubernetes plugin interesting feature

The Kubernetes plugin for Jenkins gives us an interesting ability: it can run multiple slaves as pods at the same time and execute commands on each of the running slaves; moreover, it can turn an ordinary container into a slave. All you need is to define a template like this:

    podContainers: [
        containerTemplate(
            name: 'jnlp',
            image: "${dockerRegistry}/cd/jenkins-slave-python",
            workingDir: '/tmp',
            alwaysPullImage: true,
            resourceRequestMemory: '1Gi',
            resourceLimitMemory: '2Gi',
            args: '${computer.jnlpmac} ${computer.name}'
        ),
        containerTemplate(
            name: 'python',
            image: 'python:3.8-slim',
            alwaysPullImage: true,
            ttyEnabled: true,
            resourceRequestMemory: '1Gi',
            resourceLimitMemory: '2Gi',
            command: '',
            workingDir: '/tmp'
        )
    ],

Please pay attention to the second template definition. It has, at first look, ...

Run Horovod with CPU only

I was interested in running federated model training using Horovod in a Kubernetes cluster. The first question I had was: is it possible to run Horovod on CPU only, since our containers don't have GPUs? I found out in the Horovod Git repository that this is possible. I have to install a non-GPU version of TensorFlow alongside Horovod, and everything should fly. There is also no need to specify GPU settings in the code, like:

    config.gpu_options.allow_growth = True
    config.gpu_options.visible_device_list = str(hvd.local_rank())

Also, Fardin advises: if you have GPUs on your machine and do not want to use them, set the environment variable  CUDA_VISIBLE_DEVICES=-1 .
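A minimal sketch of that last tip in Python. The key detail is that the variable must be set before TensorFlow (or any other CUDA-aware library) is imported, otherwise it has no effect:

```python
import os

# Hide all GPUs from CUDA-aware libraries; a device list of -1
# makes TensorFlow fall back to CPU execution.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# ...only now import tensorflow / horovod...
```

Setting it in the container spec (as an env entry) instead of in code achieves the same thing and avoids the import-order pitfall entirely.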

Categorical Crossentropy vs. Sparse Categorical Crossentropy

I found a nice explanation here:  https://jovianlin.io/cat-crossentropy-vs-sparse-cat-crossentropy/
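The gist, sketched with plain NumPy (toy numbers of my own, not from the linked post): both losses compute the same cross-entropy; they differ only in the label format, a one-hot vector for categorical crossentropy versus an integer class index for the sparse variant:

```python
import numpy as np

# Predicted class probabilities for one sample over 3 classes.
probs = np.array([0.7, 0.2, 0.1])

# Categorical crossentropy: the label is a one-hot vector.
one_hot = np.array([1.0, 0.0, 0.0])
cat_ce = -np.sum(one_hot * np.log(probs))

# Sparse categorical crossentropy: the label is an integer index.
label = 0
sparse_ce = -np.log(probs[label])

# Same loss value, different label encoding.
assert np.isclose(cat_ce, sparse_ce)
```

So the sparse version is just a convenience when your labels are stored as integers, saving the one-hot encoding step and memory.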