#Spinnaker on #OpenShift

To deploy Spinnaker onto an OpenShift cluster, the restricted SCC (security context constraints) needs to be softened slightly:

oc -n spinnaker edit scc restricted

Then modify the following part; the key change is switching the runAsUser strategy type to RunAsAny. After the edit the relevant part of the SCC looks like this:

  requiredDropCapabilities:
  - KILL
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: MustRunAs
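The change above can also be sketched programmatically. Below is a minimal, hypothetical sketch that applies the same edit to an in-memory copy of the SCC; the field names match the real restricted SCC (whose default runAsUser strategy is MustRunAsRange), but the function name is made up and this is not how oc itself applies edits.

```python
# Hypothetical sketch: the SCC edit above, applied to an in-memory dict.
# Field names mirror the real restricted SCC; values are illustrative.
scc = {
    "metadata": {"name": "restricted"},
    "requiredDropCapabilities": ["KILL", "MKNOD", "SETUID", "SETGID"],
    "runAsUser": {"type": "MustRunAsRange"},  # default in the restricted SCC
    "seLinuxContext": {"type": "MustRunAs"},
}

def relax_run_as_user(scc_obj):
    """Allow pods to run as any UID, which Spinnaker's images need."""
    scc_obj["runAsUser"]["type"] = "RunAsAny"
    return scc_obj

print(relax_run_as_user(scc)["runAsUser"]["type"])  # RunAsAny
```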

#OpenShift #oc #cluster up with externally accessible URL

Set up cluster:

oc cluster up --routing-suffix=okd --base-dir=_cluster_config_dir --public-hostname=master.okd

Tear down cluster:

oc cluster down

Change cluster settings:

vim _cluster_config_dir/kube-apiserver/admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://master.okd:8443
  name: 127-0-0-1:8443
- cluster:
    certificate-authority-data: LS0tL
    server: https://master.okd:8443
  name: master-okd:8443
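The manual vim edit can also be scripted. This is a hedged sketch assuming oc cluster up originally wrote the loopback URL https://127.0.0.1:8443 into the generated kubeconfig; the function name is made up:

```python
def point_kubeconfig_at(text, public_host):
    # Assumption: the generated kubeconfig contains literal loopback server URLs.
    return text.replace("https://127.0.0.1:8443", f"https://{public_host}:8443")

# Usage against the file from the section above (path as shown there):
# cfg = open("_cluster_config_dir/kube-apiserver/admin.kubeconfig").read()
# open("_cluster_config_dir/kube-apiserver/admin.kubeconfig", "w").write(
#     point_kubeconfig_at(cfg, "master.okd"))
print(point_kubeconfig_at("server: https://127.0.0.1:8443", "master.okd"))
# server: https://master.okd:8443
```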


vim _cluster_config_dir/kube-apiserver/master-config.yaml
masterPublicURL: https://master.okd:8443
oauthConfig:
  alwaysShowProviderSelection: false
  assetPublicURL: https://master.okd:8443/console/
  masterCA: ca-bundle.crt
  masterPublicURL: https://master.okd:8443
  masterURL: https://master.okd:8443

Start up cluster again:

oc cluster up --routing-suffix=okd --base-dir=_cluster_config_dir --public-hostname=master.okd
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Creating shared mount directory on the remote host ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
The server is accessible via web console at:
You are logged in as:
    User:     developer
    Password: <any value>
To login as administrator:
    oc login -u system:admin

#Docker with #Nvidia runtime for #Deep #Learning

I just recently learned that it's possible to leverage CUDA and cuDNN inside Docker.

Basically, what is needed is a corresponding Docker image (which can be found here), installation of the Nvidia Docker runtime, and configuring the Docker daemon to use nvidia as the default runtime in /etc/docker/daemon.json, as described here. Below is an example of my config:

      "default-runtime": "nvidia",
      "runtimes": {
          "nvidia": {
              "path": "/usr/bin/nvidia-container-runtime",
              "runtimeArgs": []

#Python to deal with HTTP with cookies

Recently I needed to create a script that makes HTTP calls and handles cookies along the way; here is the code:

import urllib.request as req
from urllib.error import HTTPError
import http.cookiejar as cookie

# the opener stores cookies from responses and re-sends them on later requests
cookie_jar = cookie.CookieJar()
opener = req.build_opener(req.HTTPCookieProcessor(cookie_jar))

cookie_request = req.Request('url', method='POST')
try:
    # use opener.open (not req.urlopen) so the cookie jar is actually applied
    resp = opener.open(cookie_request)
except HTTPError as err:
    print(err.code, err.reason)
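To see the cookie handling in action without an external service, here is a self-contained demo (the local test server and the cookie name are made up for illustration): the first response sets a cookie, and the opener re-sends it automatically on the second request.

```python
import http.cookiejar as cookie
import threading
import urllib.request as req
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if "Cookie" not in self.headers:
            # first visit: hand out a session cookie (name/value invented)
            self.send_header("Set-Cookie", "session=abc123")
        self.end_headers()
        # echo back whatever cookie the client sent, if any
        self.wfile.write((self.headers.get("Cookie") or "no cookie").encode())

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

cookie_jar = cookie.CookieJar()
opener = req.build_opener(req.HTTPCookieProcessor(cookie_jar))

first = opener.open(url).read().decode()   # server sets the cookie here
second = opener.open(url).read().decode()  # jar re-sends it automatically
print(first)   # no cookie
print(second)  # session=abc123
server.shutdown()
```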