Show me the code! – By Davanum Srinivas

August 11, 2016

Kubernetes Dev Bootstrap Resources

Filed under: Uncategorized — Davanum Srinivas @ 3:54 pm

Kubernetes development is centered around GitHub workflows, so the links for Issues, Pull Requests, and Commits are as follows:

https://github.com/kubernetes/kubernetes/issues
https://github.com/kubernetes/kubernetes/pulls
https://github.com/kubernetes/kubernetes/commits/master

Starting with 1.4, Kubernetes dev folks are using another repository to track Features: https://github.com/kubernetes/features

Must-read to get the lay of the land:

Kubernetes is organized around Special Interest Groups (SIGs), listed in the community repository: https://github.com/kubernetes/community

How do I follow my work? https://k8s-gubernator.appspot.com/pr/dims

What’s currently running in the CI systems? http://kubernetes.submit-queue.k8s.io/#/queue

What went wrong in the last 24 hours? Kubernetes 24-Hour Test Report

Where can I find performance numbers, flaky tests on various platforms, etc.?

Where do folks hang out? https://github.com/kubernetes/community/blob/master/README.md#slack-chat

Hope this helps!

April 21, 2016

Quick start Kubernetes in an OpenStack Environment

Filed under: Uncategorized — Davanum Srinivas @ 8:07 am

Here’s a quick way for those who want to try Kubernetes in an existing OpenStack environment:

  • Deploy a VM using an image that has Docker built in. We pick one used by Magnum in the OpenStack CI environment.
  • Use the cloud-init functionality during the Nova boot process to start Kubernetes.

Here’s the script we inject during the Nova boot process:

#!/bin/sh

# Switch off SELinux
setenforce 0

# Set the name server
sudo sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'

# Get the latest stable version of kubernetes
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
echo "K8S_VERSION : ${K8S_VERSION}"

echo "Starting docker service"
sudo systemctl enable docker.service
sudo systemctl start docker.service --ignore-dependencies
echo "Checking docker service"
sudo docker ps

# Run the docker containers for kubernetes
echo "Starting Kubernetes containers"
sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    --name=kubelet \
    -d \
    gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override="127.0.0.1" \
        --address="0.0.0.0" \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --allow-privileged=true --v=2
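
Once cloud-init has run, a quick sanity check looks something like this (assuming, as in the local-Docker guide of that era, that the manifests shipped in the hyperkube image bring the master components up and the API server answers on localhost:8080):

# Check that the kubelet container came up and started the master pods
sudo docker logs kubelet
sudo docker ps
# The API server should answer locally once the master pods are running
curl -sS http://localhost:8080/healthz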

And here’s how you upload the image from Magnum into Glance and then use nova boot to start it up (using the python-glanceclient and python-novaclient CLIs).

#!/bin/sh

export OS_REGION_NAME=RegionOne
export OS_PASSWORD=xyz123
export OS_AUTH_URL=http://172.18.184.20:5000/v2.0
export OS_USERNAME=dsrinivas
export OS_TENANT_NAME=Commons

curl -o fedora-atomic-latest.qcow2 \
    https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2

glance image-create --name "fedora-23-atomic" \
    --disk-format "qcow2" \
    --container-format=bare \
    --file fedora-atomic-latest.qcow2

nova boot \
    --key-name "k8s-keypair" \
    --flavor "m1.medium" \
    --image "fedora-23-atomic" \
    --user-data kube-init.sh \
    --config-drive true \
    "my-k8s"

Resources:

April 8, 2016

New to OpenStack Reviews – Start here!

Filed under: Uncategorized — Davanum Srinivas @ 4:15 pm

Watch this video:

Why should I do a review?

What do I look for when doing a peer review:

Also, please take a look at the tips here:

Find the Gerrit Dashboard for your project:

Watch this YouTube video from the Austin Summit:

Join the #openstack-dev IRC channel on Freenode.

Happy Reviewing!

February 6, 2015

Deploy a Centos container in Docker for running OpenStack Nova tests

Filed under: Uncategorized — Davanum Srinivas @ 1:26 pm

First, start a CentOS container using the docker command line and drop down to a bash shell

dims@dims-ubuntu:~$ sudo docker run -i -t centos /bin/bash
[root@218b625b2529 nova]# rpm -q centos-release
centos-release-7-0.1406.el7.centos.2.5.x86_64

Install the EPEL repository

[root@218b625b2529 /]# yum -y install epel-release

Install a few things needed before we can run the Nova tests:

[root@218b625b2529 /]# yum -y install git python-devel openssl-devel python-pip gcc libxslt-devel mysql-devel postgresql-devel libffi-devel libvirt-devel graphviz sqlite-devel

We need tox as well.

[root@218b625b2529 /]# pip install tox

Get the latest nova trunk

[root@218b625b2529 /]# git clone https://git.openstack.org/openstack/nova
[root@218b625b2529 /]# cd nova
[root@218b625b2529 nova]#

Run the tests as usual

[root@218b625b2529 nova]# tox -e py27 nova.tests.unit.test_crypto
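
If you want to keep the provisioned container around for later runs, you can snapshot it from the host (the image name nova-test-centos is just my choice):

dims@dims-ubuntu:~$ sudo docker commit 218b625b2529 nova-test-centos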

January 13, 2015

Quickly running a single OpenStack Nova test

Filed under: Nova, openstack — Davanum Srinivas @ 10:41 am

Here’s how we usually run a single test

dims@dims-mac:~/openstack/nova$ time tox -e py27 nova.tests.unit.test_versions
py27 develop-inst-noop: /Users/dims/openstack/nova
py27 runtests: PYTHONHASHSEED='0'
py27 runtests: commands[0] | find . -type f -name *.pyc -delete
py27 runtests: commands[1] | bash tools/pretty_tox.sh nova.tests.unit.test_versions
{1} nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good [0.180036s] ... ok
{0} nova.tests.unit.test_versions.VersionTestCase.test_release_file [0.184115s] ... ok

======
Totals
======
Ran: 2 tests in 13.0000 sec.
 - Passed: 2
 - Skipped: 0
 - Failed: 0
Sum of execute time for each test: 0.3642 sec.

==============
Worker Balance
==============
 - Worker 0 (1 tests) => 0:00:00.184115s
 - Worker 1 (1 tests) => 0:00:00.180036s
________________________________________________________________________________________________________________________ summary _________________________________________________________________________________________________________________________
  py27: commands succeeded
  congratulations 🙂

real	0m14.452s
user	0m16.392s
sys	0m2.354s

Sometimes the usual way is not very helpful, especially when you are working on new code and, say, running into import errors. In that case, here’s what you do.

First, activate the py27 virtualenv

dims@dims-mac:~/openstack/nova$ . .tox/py27/bin/activate

Then use testtools

(py27)dims@dims-mac:~/openstack/nova$ python -m testtools.run nova.tests.unit.test_versions
Tests running...

Ran 2 tests in 0.090s
OK
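
testtools.run also accepts an individual test ID, which is handy when you only care about one test method:

(py27)dims@dims-mac:~/openstack/nova$ python -m testtools.run nova.tests.unit.test_versions.VersionTestCase.test_release_file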

Or you can install pytest

(py27)dims@dims-mac:~/openstack/nova$ pip install pytest
Collecting pytest
  Downloading pytest-2.6.4.tar.gz (512kB)
    100% |################################| 516kB 877kB/s
Collecting py>=1.4.25 (from pytest)
  Downloading py-1.4.26.tar.gz (190kB)
    100% |################################| 192kB 4.3MB/s
Installing collected packages: py, pytest
  Running setup.py install for py
  Running setup.py install for pytest
    Installing py.test-2.7 script to /Users/dims/openstack/nova/.tox/py27/bin
    Installing py.test script to /Users/dims/openstack/nova/.tox/py27/bin
Successfully installed py-1.4.26 pytest-2.6.4

And then run the same test using py.test

(py27)dims@dims-mac:~/openstack/nova$ find . -name py.test
./.tox/py27/bin/py.test

(py27)dims@dims-mac:~/openstack/nova$ .tox/py27/bin/py.test -svx nova/tests/unit/test_versions.py
================================================================================================================== test session starts ===================================================================================================================
platform darwin -- Python 2.7.8 -- py-1.4.26 -- pytest-2.6.4 -- /Users/dims/openstack/nova/.tox/py27/bin/python2.7
collected 2 items

nova/tests/unit/test_versions.py::VersionTestCase::test_release_file PASSED
nova/tests/unit/test_versions.py::VersionTestCase::test_version_string_with_package_is_good PASSED

================================================================================================================ 2 passed in 1.69 seconds ================================================================================================================

These tips are based on the openstack-dev mailing list discussion:
http://openstack.markmail.org/thread/wetxcnhuq6b7auhn

June 29, 2014

CFv2 Deployment on latest DevStack using MicroBOSH

Filed under: Uncategorized — Davanum Srinivas @ 9:32 pm

As promised, here’s a follow-up to the two previous posts:

Let’s now deploy a full CFv2 instance using MicroBOSH; the instructions are from here:
Install Cloud Foundry on OpenStack

Here’s the edited flavor(s) that I used:
bosh-cfv2-flavors

Here’s the micro_bosh.yml for completeness:


name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: dynamic
  vip: 172.24.4.1

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://MY_HOST_IP:5000/v2.0
      username: admin
      api_key: passw0rd
      tenant: admin
      default_security_groups: ["ssh", "bosh"]
      default_key_name: microbosh
      private_key: /opt/stack/bosh-workspace/microbosh.pem

apply_spec:
  properties:
    nats:
      ping_interval: 30
      ping_max_outstanding: 30
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.north-america.pool.ntp.org
      - 1.north-america.pool.ntp.org


Here’s the cf-173-openstack.yml, originally from @ferdy, with minor tweaks:


<%
director_uuid = 'CHANGEME'
static_ip = 'CHANGEME'
root_domain = "#{static_ip}.xip.io"
deployment_name = 'cf'
cf_release = '173'
protocol = 'http'
common_password = 'c1oudc0wc1oudc0w'
%>
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
  - name: cf
    version: <%= cf_release %>

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m1.large

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
  - name: default
    type: dynamic
    cloud_properties:
      security_groups:
        - default
        - bosh
        - cf-private
  - name: external
    type: dynamic
    cloud_properties:
      security_groups:
        - default
        - bosh
        - cf-public
  - name: floating
    type: vip
    cloud_properties: {}

resource_pools:
  - name: common
    network: default
    size: 14
    stemcell:
      name: bosh-openstack-kvm-ubuntu-lucid
      version: latest
    cloud_properties:
      instance_type: m1.small
  - name: large
    network: default
    size: 1
    stemcell:
      name: bosh-openstack-kvm-ubuntu-lucid
      version: latest
    cloud_properties:
      instance_type: m1.medium

jobs:
  - name: nats
    templates:
      - name: nats
      - name: nats_stream_forwarder
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: syslog_aggregator
    templates:
      - name: syslog_aggregator
    instances: 1
    resource_pool: common
    persistent_disk: 65536
    networks:
      - name: default
        default: [dns, gateway]

  - name: nfs_server
    templates:
      - name: debian_nfs_server
    instances: 1
    resource_pool: common
    persistent_disk: 65535
    networks:
      - name: default
        default: [dns, gateway]

  - name: postgres
    templates:
      - name: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 65536
    networks:
      - name: default
        default: [dns, gateway]
    properties:
      db: databases

  - name: uaa
    templates:
      - name: uaa
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: loggregator
    templates:
      - name: loggregator
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: trafficcontroller
    templates:
      - name: loggregator_trafficcontroller
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: cloud_controller
    templates:
      - name: cloud_controller_ng
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: cloud_controller_worker
    templates:
      - name: cloud_controller_worker
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: clock_global
    templates:
      - name: cloud_controller_clock
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: etcd
    templates:
      - name: etcd
    instances: 1
    resource_pool: common
    persistent_disk: 10024
    networks:
      - name: default
        default: [dns, gateway]

  - name: health_manager
    templates:
      - name: hm9000
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: dea
    templates:
      - name: dea_logging_agent
      - name: dea_next
    instances: 1
    resource_pool: large
    networks:
      - name: default
        default: [dns, gateway]

  - name: router
    templates:
      - name: gorouter
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
    - <%= root_domain %>

  networks:
    apps: default

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.default.<%= deployment_name %>.microbosh
    port: 4222
    machines:
      - 0.nats.default.<%= deployment_name %>.microbosh

  syslog_aggregator:
    address: 0.syslog-aggregator.default.<%= deployment_name %>.microbosh
    port: 54321

  nfs_server:
    address: 0.nfs-server.default.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    idmapd_domain: "localdomain"

  debian_nfs_server:
    no_root_squash: true

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.default.<%= deployment_name %>.microbosh

  loggregator:
    servers:
      zone:
        - 0.loggregator.default.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'zone'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    prune_stale_droplets_interval: 3000
    droplet_stale_threshold: 1200
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
        - 0.router.default.<%= deployment_name %>.microbosh
      z2: []

  etcd:
    machines:
      - 0.etcd.default.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
      - 169.254.0.0/16 # Google Metadata endpoint

  dea_next: *dea

  disk_quota_enabled: false

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true
      - tag: uaa
        name: uaadb
        citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: uaa
        name: uaadb
        citext: true

  cc: &cc
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
      - name: java_buildpack
        package: buildpack_java
      - name: ruby_buildpack
        package: buildpack_ruby
      - name: nodejs_buildpack
        package: buildpack_nodejs
      - name: go_buildpack
        package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>

  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
        - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
        - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

Things to tweak:
In my cf-173-openstack.yml, I had set static_ip to 172.24.4.10, so once MicroBOSH finished deploying CFv2, I had to find the VM running the router job and give it the floating IP 172.24.4.10. You can do this by running “bosh vms”, noting the IP address for “router/0”, finding the corresponding VM in Horizon (or with nova list), and then assigning its floating IP, as sketched below. You need to do this before you try to use the cf command-line client. Be sure to download the latest and greatest CLI from https://github.com/cloudfoundry/cli#downloads.
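
Roughly, the dance looks like this (the instance ID is illustrative):

# Find the IP that BOSH handed to the router job
bosh vms
# Match that IP to a Nova instance, then point the static IP at it
nova list
nova floating-ip-associate <router-instance-id> 172.24.4.10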

Flaky Stuff:
The postgres VM ran into trouble multiple times. I figured out how to stop/start its job scripts by hand, but since other VMs like cloud_controller, clock_global, and cloud_controller_worker had issues as well, it was better to whack it with a big hammer: run “bosh delete deployment cf” and re-instantiate the VMs. (Yes, I tried combinations of the bosh start/recreate/restart commands too.)
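
For reference, the big-hammer cycle with the old BOSH CLI looks roughly like this:

bosh delete deployment cf
bosh deployment cf-173-openstack.yml
bosh deploy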

Hints:
bosh-lite’s README.md is very helpful on how to build the cf release. Andy’s blog post helped quite a bit to peel the onion while debugging, as did Dr Nic’s posts. Thanks, folks!

June 24, 2014

Deploying BOSH with Micro BOSH on latest DevStack

Filed under: Uncategorized — Davanum Srinivas @ 1:33 pm

As a follow-up to Running Cloud Foundry’s Micro BOSH On Latest DevStack, I had to bump VOLUME_BACKING_FILE_SIZE to 200GB in DevStack, as 100GB was not enough. The instructions from http://docs.cloudfoundry.org/deploying/openstack/deploying_bosh.html were handy as usual.
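
The change itself is a one-line edit to the local.conf from that post:

VOLUME_BACKING_FILE_SIZE=200GB

Here’s my bosh-openstack.yml: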

---
name: bosh-openstack
director_uuid: 80a0b9cc-a7e4-4134-81ee-a186e8bebff8

release:
  name: bosh
  version: latest

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m1.small

update:
  canaries: 1
  canary_watch_time: 3000-120000
  update_watch_time: 3000-120000
  max_in_flight: 4

networks:
  - name: floating
    type: vip
    cloud_properties: {}
  - name: default
    type: dynamic
    cloud_properties: {}

resource_pools:
  - name: common
    network: default
    size: 8
    stemcell:
      name: bosh-openstack-kvm-ubuntu
      version: latest
    cloud_properties:
      instance_type: m1.small

jobs:
  - name: nats
    template: nats
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: redis
    template: redis
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: postgres
    template: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]

  - name: powerdns
    template: powerdns
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
      - name: floating
        static_ips:
          - 172.24.4.2

  - name: blobstore
    template: blobstore
    instances: 1
    resource_pool: common
    persistent_disk: 51200
    networks:
      - name: default
        default: [dns, gateway]

  - name: director
    template: director
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]
      - name: floating
        static_ips:
          - 172.24.4.3

  - name: registry
    template: registry
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: health_monitor
    template: health_monitor
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

properties:
  nats:
    address: 0.nats.default.bosh-openstack.microbosh
    user: nats
    password: nats

  redis:
    address: 0.redis.default.bosh-openstack.microbosh
    password: redis

  postgres: &bosh_db
    host: 0.postgres.default.bosh-openstack.microbosh
    user: postgres
    password: postgres
    database: bosh

  dns:
    address: 172.24.4.2
    db: *bosh_db
    recursor: 172.24.4.1

  blobstore:
    address: 0.blobstore.default.bosh-openstack.microbosh
    agent:
      user: agent
      password: agent
    director:
      user: director
      password: director

  director:
    name: bosh
    address: 0.director.default.bosh-openstack.microbosh
    db: *bosh_db

  registry:
    address: 0.registry.default.bosh-openstack.microbosh
    db: *bosh_db
    http:
      user: registry
      password: registry

  hm:
    http:
      user: hm
      password: hm
    director_account:
      user: admin
      password: admin
    resurrector_enabled: true

  ntp:
    - 0.north-america.pool.ntp.org
    - 1.north-america.pool.ntp.org

  openstack:
    auth_url: http://173.193.231.50:5000/v2.0
    username: admin
    api_key: passw0rd
    tenant: admin
    region:
    default_security_groups: ["default", "ssh", "bosh"]
    default_key_name: microbosh
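
With the manifest in place, the deploy itself is the usual old-CLI sequence (the stemcell and release file names below are placeholders for whatever you downloaded):

bosh upload stemcell <bosh-stemcell.tgz>
bosh upload release <bosh-release.tgz>
bosh deployment bosh-openstack.yml
bosh deploy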

Running Cloud Foundry’s Micro BOSH on latest DevStack

Filed under: cloud foundry, openstack — Davanum Srinivas @ 7:45 am

The Cloud Foundry docs are excellent. Here’s where I started from:
http://docs.cloudfoundry.org/deploying/openstack/

I provisioned a big beefy bare-metal box with Ubuntu 14.04 LTS on SoftLayer and installed DevStack on it as usual. Here’s the super simple local.conf that I used. Note the 100GB volume backing file needed later for the Micro BOSH deployment.

[[local|localrc]]
FLAT_INTERFACE=eth0
PUBLIC_INTERFACE=eth1
ADMIN_PASSWORD=passw0rd
MYSQL_PASSWORD=passw0rd
RABBIT_PASSWORD=passw0rd
SERVICE_PASSWORD=passw0rd
VOLUME_BACKING_FILE_SIZE=100GB

Once you deploy DevStack, follow the steps in the CF docs URL above. Here’s the ~/.fog file that I used in step #2. Note that I am just using the “admin” credentials and the “admin” tenant for all OpenStack operations:

:openstack:
  :openstack_auth_url:  http://9.193.231.50:5000/v2.0/tokens
  :openstack_api_key:   passw0rd
  :openstack_username:  admin
  :openstack_tenant: admin
  :openstack_region:

Here’s a script to create all the security groups and floating IPs needed in step #3:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

nova secgroup-create ssh ssh
nova secgroup-add-rule ssh udp 68 68 0.0.0.0/0
nova secgroup-add-rule ssh tcp 22 22 0.0.0.0/0
nova secgroup-add-rule ssh icmp -1 -1 0.0.0.0/0

nova secgroup-create bosh bosh
nova secgroup-add-group-rule bosh bosh tcp 1 65535
nova secgroup-add-rule bosh tcp 4222 4222 0.0.0.0/0
nova secgroup-add-rule bosh tcp 6868 6868 0.0.0.0/0
nova secgroup-add-rule bosh tcp 25250 25250 0.0.0.0/0
nova secgroup-add-rule bosh tcp 25555 25555 0.0.0.0/0
nova secgroup-add-rule bosh tcp 25777 25777 0.0.0.0/0
nova secgroup-add-rule bosh tcp 53 53 0.0.0.0/0
nova secgroup-add-rule bosh udp 68 68 0.0.0.0/0
nova secgroup-add-rule bosh udp 53 53 0.0.0.0/0

nova secgroup-create cf-public cf-public
nova secgroup-add-rule cf-public udp 68 68 0.0.0.0/0
nova secgroup-add-rule cf-public tcp 80 80 0.0.0.0/0
nova secgroup-add-rule cf-public tcp 443 443 0.0.0.0/0

nova secgroup-create cf-private cf-private
nova secgroup-add-rule cf-private udp 68 68 0.0.0.0/0
nova secgroup-add-group-rule cf-private cf-private tcp 1 65535

nova floating-ip-create
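
The manifests here also expect a Nova keypair named microbosh, with the private key saved where micro_bosh.yml points; something like:

nova keypair-add microbosh > /opt/stack/bosh-workspace/microbosh.pem
chmod 600 /opt/stack/bosh-workspace/microbosh.pem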

For Step #4, I used this stemcell – bosh-stemcell-2611-openstack-kvm-ubuntu-lucid.tgz – and the following micro_bosh.yml:

---
name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: dynamic
  vip: 172.24.4.1

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.small

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://9.193.231.50:5000/v2.0
      username: admin
      api_key: passw0rd
      tenant: admin
      default_security_groups: ["ssh", "bosh"]
      default_key_name: microbosh
      private_key: /opt/stack/bosh-workspace/microbosh.pem

apply_spec:
  properties:
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.north-america.pool.ntp.org
      - 1.north-america.pool.ntp.org
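
For reference, the actual deploy with the micro plugin looks roughly like this (run from the deployments workspace, with the stemcell path adjusted to wherever you downloaded it):

cd /opt/stack/bosh-workspace/deployments
bosh micro deployment microbosh-openstack
bosh micro deploy /path/to/bosh-stemcell-2611-openstack-kvm-ubuntu-lucid.tgz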

That was it! A final check on the status:

stack@bigblue:~/bosh-workspace/deployments$ bosh micro status
Stemcell CID   6203baa8-d64f-4701-952e-a33ea0aabdb0
Stemcell name  bosh-stemcell-2611-openstack-kvm-ubuntu-lucid
VM CID         55fc5d01-e56a-4120-bd8c-6ec1c7d295ea
Disk CID       a9eb53a9-74a1-4210-8afe-d04ba68536ac
Micro BOSH CID bm-c84e8442-016a-499f-aa31-19a8a9c58a9e
Deployment     /opt/stack/bosh-workspace/deployments/microbosh-openstack/micro_bosh.yml
Target         https://172.24.4.1:25555

stack@bigblue:~/bosh-workspace/deployments$ bosh status
Config
             /opt/stack/.bosh_config

Director
  Name       microbosh-openstack
  URL        https://172.24.4.1:25555
  Version    1.2611.0 (00000000)
  User       admin
  UUID       46aa8b77-3f41-4268-952e-37c07f938b86
  CPI        openstack
  dns        enabled (domain_name: microbosh)
  compiled_package_cache disabled
  snapshots  disabled

Deployment
  not set

Next up, I will try steps #5, #6, and #7 and report back here.

March 17, 2014

Generating a Bitcoin Private Key and Address

Filed under: bitcoin — Davanum Srinivas @ 11:30 am

Ken Shirriff’s blog post here has an excellent introduction to Bitcoin. One of his code snippets shows sample Python code to generate a private key in WIF format and an address. I tweaked it just a bit to replace usage of Python’s random module with os.urandom, and stripped it down to just what’s needed to show the exponent, private key, and address. Here’s the effort in a gist:


import ecdsa
import ecdsa.der
import ecdsa.util
import hashlib
import os
import re
import struct

b58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58encode(n):
    result = ''
    while n > 0:
        result = b58[n % 58] + result
        n /= 58
    return result

def base256decode(s):
    result = 0
    for c in s:
        result = result * 256 + ord(c)
    return result

def countLeadingChars(s, ch):
    count = 0
    for c in s:
        if c == ch:
            count += 1
        else:
            break
    return count

# https://en.bitcoin.it/wiki/Base58Check_encoding
def base58CheckEncode(version, payload):
    s = chr(version) + payload
    checksum = hashlib.sha256(hashlib.sha256(s).digest()).digest()[0:4]
    result = s + checksum
    leadingZeros = countLeadingChars(result, '\0')
    return '1' * leadingZeros + base58encode(base256decode(result))

def privateKeyToWif(key_hex):
    return base58CheckEncode(0x80, key_hex.decode('hex'))

def privateKeyToPublicKey(s):
    sk = ecdsa.SigningKey.from_string(s.decode('hex'), curve=ecdsa.SECP256k1)
    vk = sk.verifying_key
    return ('\04' + vk.to_string()).encode('hex')

def pubKeyToAddr(s):
    ripemd160 = hashlib.new('ripemd160')
    ripemd160.update(hashlib.sha256(s.decode('hex')).digest())
    return base58CheckEncode(0, ripemd160.digest())

def keyToAddr(s):
    return pubKeyToAddr(privateKeyToPublicKey(s))

# Generate a random private key
private_key = os.urandom(32).encode('hex')

# You can verify the values on http://brainwallet.org/
print "Secret Exponent (Uncompressed) : %s " % private_key
print "Private Key : %s " % privateKeyToWif(private_key)
print "Address : %s " % keyToAddr(private_key)

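To run keyUtils.py (it’s Python 2), the only non-stdlib dependency is the ecdsa package:

pip install ecdsa
python keyUtils.py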

October 17, 2012

Scripts to start/stop OpenStack environment built using DevStack

Filed under: Uncategorized — Davanum Srinivas @ 4:14 pm

Work in progress… Once I bootstrapped an OpenStack install using DevStack, I wanted to keep using the environment that was just built. I could not find scripts to start/stop all the services, so here’s my effort. If someone has a better way, please let me know!

#!/bin/bash

rm -rf /var/log/nova/*.log

service mysql start
service rabbitmq-server start

cd /opt/stack/glance/bin
/opt/stack/glance/bin/glance-registry --config-file=/etc/glance/glance-registry.conf > /var/log/nova/glance-registry.log 2>&1 &

cd /opt/stack/glance/bin
/opt/stack/glance/bin/glance-api --config-file=/etc/glance/glance-api.conf > /var/log/nova/glance-api.log 2>&1 &
echo "Waiting for g-api to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:9292;
do sleep 1; done"; then
        echo "g-api did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/keystone/bin
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug > /var/log/nova/keystone-all.log 2>&1 &
echo "Waiting for keystone to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:5000;
do sleep 1; done"; then
        echo "keystone did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-api.log 2>&1 &

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-volume --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-volume.log 2>&1 &

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-scheduler.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-api > /var/log/nova/nova-api.log 2>&1 &
echo "Waiting for nova-api to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:8774;
do sleep 1; done"; then
        echo "nova-api did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-scheduler > /var/log/nova/nova-scheduler.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-cert > /var/log/nova/nova-cert.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-objectstore > /var/log/nova/nova-objectstore.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-network > /var/log/nova/nova-network.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-compute > /var/log/nova/nova-compute.log 2>&1 &

cd /opt/stack/noVNC
/opt/stack/noVNC/utils/nova-novncproxy --config-file /etc/nova/nova.conf  --web . > /var/log/nova/nova-novncproxy.log 2>&1 &

cd /opt/stack/nova/bin/
/opt/stack/nova/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf > /var/log/nova/nova-xvpvncproxy.log 2>&1 &

cd /opt/stack/nova/bin/
/opt/stack/nova/bin/nova-consoleauth > /var/log/nova/nova-consoleauth.log 2>&1 &

service apache2 start

And here’s the stop script:

#!/bin/bash

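# Kill every leftover process that was launched from /opt/stack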
kill -9 `ps aux | grep -v grep | grep /opt/stack | awk '{print $2}'`

service apache2 stop
service rabbitmq-server stop
service mysql stop
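
To use these (the file names are my own choice), save the first script as stack-start.sh and the second as stack-stop.sh, then:

chmod +x stack-start.sh stack-stop.sh
sudo ./stack-start.sh
# ... work with the environment ...
sudo ./stack-stop.sh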